Hacker News: krashidov's comments

Do they still serve ads when you click the Start button?

They serve ads in notifications. Of course the Start menu still has them. (My work computer can't move to Linux, so I'm stuck witnessing the mess.)

We're building this at type.com. Ideally, one day we want to build the next-gen protocol so that we're not searching for yet another communications platform, but it's going to take a while for chat to stabilize with all the generative UI and agentic stuff we're building. We're even talking about open-sourcing it.

As for the specific complaints about not owning your data: we're building the product so that you own your data and you can run your agents and read your messages however often you want. Obviously, once we build a platform and others build third-party apps on it, we'll have to impose some restrictions, so it'll be a balancing act going forward.


> WebAssembly failed to keep some of its promises here

classic case of not using an await before your promise


The cynical part of me thinks that software has peaked. New languages and technology will be derivatives of existing tech. There will be no React successor. There will never be a browser that can run something other than JS. And the reason is that in 20 years the new engineers will not know how to code anymore.

The optimist in me thinks that the clear progress in how good the models have gotten shows that this is wrong. Agentic software development is not a closed loop.


I often find myself wondering about these things in the context of Star Trek... like... could Geordi actually code? Could he actually fix things? Or did the computer do all the heavy lifting? They asked "the computer" to do SO MANY things that really parallel today's direction with "AI". Even Data would ask the computer to do gobs of simulations.

Is the value in knowing how to do an operation by hand, or is the value in knowing WHICH operation to do?


This cuts both ways. If you were an average programmer in love with FreePascal 20 years ago, you'd have to trudge in darkness, alone.

Now you can probably create a modern package manager (uv/cargo), a modern package repository (Artifactory, etc.), and a lot of the modern ecosystem on top of the existing base within a few years.

Ten skilled and highly motivated programmers could probably attempt what Linus did in 1991, and now they might actually be able to pull it off all the way, whereas between 1998 and now we were basically bogged down in Windows/Linux/macOS/Android/iOS.


> New languages and technology will be derivatives of existing tech.

This has always been true.

> There will be no React successor.

No one needs one, but you can have one just by asking the AI to write it, if that's what you need.

> There will never be a browser that can run something other than JS.

Why not? Just tell the AI to make it.

> And the reason for that is because in 20 years the new engineers will not know how to code anymore.

They may not need to know how to code, but they should still be taught how to read and write constructed languages like programming languages. Maybe in the future we won't use these things to write programs, but if you think we're going to go through the rest of history with just natural languages and leave all the precision to the AI, revisit why programming languages exist in the first place.

Somehow we have to communicate precise ideas between each other and the LLM, and constructed languages are a crucial part of how we do that. If we go back to a time before we invented these very useful things, we'll be talking past one another all day long. The LLM's ability to write code doesn't change the fact that we have to understand it; we just have one more entity to consider in the context of writing code. E.g., sometimes the only way to get the LLM to write certain code is to feed it other code; no amount of natural-language prompting will get there.


> Maybe in the future we won't use these things to write programs, but if you think we're going to go through the rest of history with just natural languages and leave all the precision to the AI, revisit why programming languages exist in the first place.

> The LLM's ability to write code doesn't change the fact that we have to understand it; we just have one more entity to consider in the context of writing code. E.g., sometimes the only way to get the LLM to write certain code is to feed it other code; no amount of natural-language prompting will get there.

You don't exactly need PLs to clarify an ambiguous requirement; you can just use a restricted, unambiguous subset of natural language, which is what you'd do when discussing or elaborating on something with a coworker.

Indeed, as with terms & conditions pages, which people always skip because they're written in "legal language", using a restricted, unambiguous subset of natural language to describe something is always much more verbose and unwieldy than "incomprehensible" mathematical notation and PLs, but it's not impossible.

With that said, the previous paragraph works if you're delegating to a competent coworker. It should work on "AGI" too, if that ever exists. However, I don't think it will work reliably with present-day LLMs.


> You don't exactly need to use PLs to clarify an ambiguous requirement

I agree. I guess what I'm trying to say is that the only reason we've called constructed languages "programming languages" for so long is that they've primarily been used to write programs. But I don't think that means we'll be turning to unambiguous natural languages, because what we've found from a UX standpoint is that it's actually better for constructed languages to be less like natural languages than to be covert natural languages; it sets expectations appropriately.

> you can just use a restricted unambiguous subset of natural language, like what you should do when discussing or elaborating something with your coworker.

We've tried that, and it sucks. COBOL and its descendants also never gained traction, for the same reasons. In fact, proximity to natural language is not important to making a constructed language good at what it's for. As you note, often the things you want to say in a constructed language are too awkward or verbose to say in natural-language-ish languages.

> terms & conditions pages, which people always skip because they're written in a "legal language"

Legalese is not unambiguous though, otherwise we wouldn’t need courts -- cases could be decided with compilers.

> using a restricted unambiguous subset of natural language to describe something is always much more verbose and unwieldy compared to "incomprehensible" mathematical notation & PLs, but it's not impossible to do so.

When there is a cost per token, it becomes very important to say everything you need to in as few tokens as possible; just because something is possible doesn't mean it's economical. This points to a mixture of natural language interspersed with code, math, and diagrams, so people will still need to read and write these things.

Moreover, we know that there's little you can do to prevent writing bugs entirely, so the more you have to say, the more chances you have to say wrong things (i.e., all else equal, higher LOC means more bugs).

Maybe the LLM writes bugs at a lower rate than a human, but it's not writing bug-free code, and the volume of code it writes is astronomical, so the absolute number of bugs written is probably enormous as well. Natural language has very low information density: more text to say the same thing, more cost to store and transmit, more surface area to bug-check and rot. We should prefer to write denser code in the future for these reasons. I don't think that means we'll be reading/writing zero code.


That's an interesting possibility to consider. Presumably the effect would also be compounded by the massive amount of training data for the incumbent languages and tools, further handicapping new entrants.

However, there will be a large minority of developers who will eschew AI tools for a variety of reasons, and those folks will be the ones to build successors.


Will they be willing to offer their content for training AI models?


Probably not.

We have witnessed, over the past few years, an "AI fair use" Pearl Harbor sneak attack on intellectual property.

The lesson has been learned:

In effect, intellectual property used to train LLMs becomes anonymous common property. My code becomes your code with no acknowledgement of authorship or lineage, with no attribution or citation.

The social rewards (e.g., credit, respect) that often motivate open source work are undermined. The work is assimilated and resold by the AI companies, reducing its economic value to the authors.

The images, the video, the code, the prose, all of it stolen to be resold. The greatest theft of intellectual property in the history of Man.


> The greatest theft of intellectual property in the history of Man.

Copyright was always supposed to be a bargain with authors for the ultimate benefit of the public domain. If AI proves to be more beneficial to the public interest than copyright, then copyright will have to go.

You can argue for compromise -- for peaceful, legal coexistence between Big Copyright and Big AI -- but that will just result in a few privileged corporations paywalling all of the purloined training data for their own benefit. Instead of arguing on behalf of legacy copyright interests, consider fighting for open models instead.

In a larger historical context, nothing all that special is happening either way. We pulled copyright law out of our asses a couple hundred years ago; it can just as easily go back where it came from.


> If AI proves to be more beneficial to the public interest than copyright, then copyright will have to go.

Going forward? Okay, sure. But people created all of the works they created with the understanding of the old system. If you want to change the deal, then creators need to know that first, so they can decide whether they still want to participate.

Letting everyone create all those works and spend that labor on the promise of copyright, and then pulling the rug ("oops, this is just too important") is not fair to the people who put in that labor, especially when the people redefining the arrangement are getting 100% of the value and the creators got, and will get, nothing.


Life isn't fair, and 100+ year copyright terms enforced eternally with unbreakable DRM sure as hell aren't.

But open-weight LLMs are a pretty decent compromise.


There is one missing factor in your argument: the wealth transfer. The public was almost never the beneficiary of copyright and other IP. Except perhaps in its earliest phases, when copyright had a strict term limit, it was always corporations who fought for it (Disney being the most infamous), using it to prevent the public from economically benefiting from their work almost forever.

And then people found a way to use the same copyright law to widely distribute their work without fear of losing attribution or being exploited. Here come LLMs that abuse the "fair use" argument to break attribution and monetize someone else's work. Which way does the money flow? To the corporations again.

IP law when it suits them, fair use when it benefits them. One splendid demonstration of this hypocrisy is how clawd and clawdbot were forced to rename (trademark law, in this case). By twisting and reinterpreting laws in whatever way suits them, these glorified marauders broke a trust mechanism that people relied on for openly sharing their work.

It incentivizes ordinary people to hide their work from the public. Don't assume that AI is going to make up for that loss. The level of original thinking in LLMs is very suspect, despite the pompous and deceitful claims of its creators to the contrary. Meanwhile, the loss of knowledge sharing and cooperation on a global scale will throw the civilizational growth rate back into the dark ages. Neither AI nor corporations are anywhere near the creativity and original thinking of the world working together. Ultimately, LLMs serve only the continued one-way transfer of wealth in favor of an insatiably greedy minority, at the cost of losing the benefit of the internet (knowledge sharing) and enormous damage to the environment, all of which actively harm the public.


> Ultimately, LLMs serve only the continued one-way transfer of wealth in favor of an insatiably greedy minority

Including the ones I can run on my own PC at home? I couldn't do that before. Maybe I'm the greedy minority, but I'm stronger and (at least intellectually) wealthier than I was before any of this started happening.

Qwen 3.5, which dropped yesterday, is a genuine GPT 5-class model. Even the ones released by US labs such as OpenAI and Allen AI are legitimate popular resources in their own right. You seem to feel disempowered, while I feel the opposite.


Yes, even the ones you can run on your own system. They're no different from the proprietary OS and software you used to run on your system, in whose design you had no say whatsoever. These "free to run" models are hardly open source: you don't have the data that was used to train them. And it's not just about the legality of that data; the chosen dataset may have extreme biases that you can never satisfactorily eliminate from a trained model.

As if that weren't bad enough, these models cannot be trained on a regular home computer. But instead of striving to improve the energy efficiency of these models, the big corporations build and run massive gas-guzzling data centers to train them. They ruin the quality of life of their neighbors through pollution, water depletion, and electricity price increases. They also disproportionately affect the world's poor, by reducing the supply of essential computing components like RAM (needed for medical devices, utility and manufacturing installations, and every other aspect of modern life), and by aggravating the climate crisis, whose victims are the poorest.

They don't give you those models out of the goodness of their hearts. They are just advertisements and trial pieces for their premium services, and they peddle the agenda of their creators. So yes, those models are empowering only in a very narrow sense, without any foresight. They are still money-making engines for the rich that subject you to their benevolence, whims, and fancies.


    Once men turned their thinking over to machines
    in the hope that this would set them free.

    But that only permitted other men with machines
    to enslave them.

    ...

    Thou shalt not make a machine in the
    likeness of a human mind.

    -- Frank Herbert, Dune


Eh, we already have a name for the concept of living by plausible-sounding works of fiction: religion.

Yet another post that misses (or chooses to overlook) my point: this stuff is running on my machine. "Seizing the means of production" means going into my back room and pulling a computer out of a rack.


Alibaba (China) thinks for you. They control you, to some extent.

Wikipedia: "Qwen (also known as Tongyi Qianwen, Chinese: 通义千问; pinyin: Tōngyì Qiānwèn) is a family of large language models developed by Alibaba Cloud. Many Qwen variants are distributed as open‑weight models under the Apache‑2.0 license, while others are served through Alibaba Cloud. Their models are sometimes described as open source, but the training code has not been released nor has the training data been documented, and they do not meet the terms of either the Open Source AI Definition or the Model Openness Framework from the Linux Foundation."


Oh, no

The Linux Foundation is coming for me

Well, anyway, where were we


This isn't a hypothetical or fictional problem. It's a well-known, well-warned-about problem that we already see in action. How many pro-China biases have the Chinese models shown? How often does Grok do whatever it wants (including calling itself Mecha-Hitler and undressing people, including minors, for fun)? How many times has nearly every model taken pro-oligarch stances (e.g., refusing to draw Mickey Mouse even after its copyright expired)? How many people, including kids, were driven to suicide by some of these models?

There is no end to the examples of how it harms ordinary people. And yet you decide to just hand-wave those concerns away, as if they don't exist for you or anyone else. There is no debate when all you do is ignore the counterarguments. It's like those science deniers who stick to their beliefs no matter how much evidence is presented.


Don't get me wrong: I'm interested in the Chinese models only to the extent that their weights are available. I hope DeepSeek 4 sees the light of day on HuggingFace, but a lot of wealthy people's oxen are being gored, and I suspect it'll be the last one we get, if it's released at all.

If I want to see Mickey Mouse or any number of copyrighted Hollywood figures, Z-Image Turbo and HunyuanImage-3 will gladly oblige. The Chinese models may be biased to deny Taiwanese self-rule, and they may change the subject when you ask about the Tiananmen Square massacre... but they do work, and as of the Qwen 3.5 release they work well enough to be used by people at home who don't have a rack of H200s in the basement.

The most important thing about the Chinese models is that they will still be there on my hard drive 20 years from now. No additional censorship beyond what they shipped with, which (being a Westerner) is largely in areas I don't care about. No rug pulls, unwanted updates, usage limits, or price increases. No ablation of whatever subjects are deemed politically incorrect in the future. No ads. No spying. No realignment with the sayings of Chairman Musk.

As for suicide, that is a silly mediagenic exercise in blaming inanimate tools for the actions of mentally-ill people and the inaction of negligent parents. I don't consider it a valid or relevant counterargument, so yes, I'm going to hand-wave away your concerns in that area.


Shouldn't that mean software development positions will lean more towards research, if you need new algorithms but never need anyone to integrate them?


There is another lunatic possibility: the AI explosion yields an execution model and programming paradigm that renders most preexisting approaches to coding irrelevant.

We have been stuck in the procedural treadmill for decades. If anything this AI boom is the first major sign of that finally cracking.


Friction is the entire point in human organizations. I'd wager AI is being used to build boondoggles: apps that have no value. They are being found out fast.

On the other side of things, my employer decided they did not want to pay for a variety of SaaS products. Instead, a few of my colleagues got together and built a tool using Trino, OPA, and a backend/frontend to reduce spend by millions per year. We use Trino as a federated query engine that calls back to OPA, with policies updated via code or a frontend UI. I believe Wiz does something similar, but they're security-focused and have a custom eBPF agent.

That's also on the list to knock out, as we're not impressed with Wiz's resource usage.
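The pattern described above, a query layer delegating an allow/deny decision to a policy engine, can be sketched in miniature. This is purely illustrative Python: the roles, table names, and policy structure are invented, not the actual Trino/OPA integration.

```python
# Toy sketch of a table-level access check, roughly the shape of the
# decision a federated query engine (like Trino) might delegate to a
# policy engine (like OPA). All roles and tables here are invented.

POLICIES = {
    # role -> set of tables the role may query
    "analyst": {"warehouse.sales", "warehouse.marketing"},
    "engineer": {"warehouse.sales", "logs.app_events"},
}

def allow_query(role: str, table: str) -> bool:
    """Return True if the role is permitted to query the table.
    Unknown roles are denied by default."""
    return table in POLICIES.get(role, set())

print(allow_query("analyst", "warehouse.sales"))  # True
print(allow_query("intern", "warehouse.sales"))   # False
```

In the real setup the policy would live in OPA and be consulted over its API; the point is just that the query engine never hardcodes access rules, so they can be updated via code or a UI independently of the queries.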


AI will finally rewrite everything in Rust.


What a blunder by Anthropic. We'll see what OpenClaw turns into and whether it sticks around, but it's still a huge and rare blunder by Anthropic.


I don't think so; it's trivial to spin up an OpenClaw clone. The only value here is the brand.


I highly suspect he might not even consider Anthropic, since at some point they enforced restrictions on OpenClaw using their APIs.


yes that's the blunder I'm talking about


I am sure they made a bid. The blog makes it sound like he talked to multiple labs.


They're (Anthropic) also the ones who have been routinely rug-pulling access from projects that try to jump onto the Claude Code API, pushing those projects to OpenAI.


Do you have any references for that?

AFAIK Anthropic won't let projects use the Claude Code subscription feature, and actually pushes those projects to the Claude Code API instead.


I'd like a reference for it being rug pulling. What happened with OpenCode certainly wasn't rug pulling, unless Anthropic asked them to support using a Claude subscription with it.


The carbon in our atmosphere is already there, and it won't go away on its own. So there really is nothing more you can do other than take it out of the air and store it somewhere for as long as you can. Trees are a good way to store it until we have better technology / can handle climate change better.


No. Pick a timeline, short or long. If it's long, trees don't capture carbon. Not in any scenario of population growth, which inevitably leads to some edgelord reductionist "maybe we should all die then for what we're doing to Gaia!" tripe.

This "climate is 100 years" framing, while using ice core samples to make your case, is not in support of science. It is in support of politicians.

The latter I'm personally getting sick of. And the people who can't separate them harm the former.


> Not in any scenario of population growth

We're not really in a scenario of population growth: there is a direct correlation between being relatively well off and having fewer kids. Basically every developed nation hit peak population a long time ago, and the faster we pull other nations out of extreme poverty, the sooner their populations will start falling too.

Even the most fatalistic estimates have world population peaking at 10 billion.


Yes, and the scary thing is that soon the atmospheric carbon PPM will be high enough to start affecting how we think, act, and feel on a day-to-day basis.


Surprisingly, no. Humans adapt to higher CO2 concentrations over a period of days to weeks. Submarines run as high as 5000ppm, which is way above normal atmospheric concentration.[1] Many indoor environments are above 1000ppm.

This seems to be like high-altitude adaptation. It's going back and forth between concentrations that causes problems; at a steady moderate concentration, the adaptation just doesn't happen.

[1] https://www.nationalacademies.org/read/11170/chapter/5#51


Congrats on the launch! We're building type.com and we would love to use this. Shoot me an email: k at type dot com

Our use case is letting users build lightweight internal apps within their chat workspace (say, an applicant tracking system per hire, etc.)


Thank you. I just sent you an email. Looking forward to learning more about what you are building.



Love the shout, but git-ai is decidedly not trying to replace the SCMs. There are teams building code review tools (commercial and internal) on top of the standard, and I don't think it'll be long before GitHub, GitLab, and the usual suspects start supporting it, since folks in the community have already been hacking it into Chrome extensions. This one got play on HN last week: https://news.ycombinator.com/item?id=46871473


Yep, I know it's not meant to be an SCM tool, but I thought it was somewhat related to what they're doing right now:

"Entire CLI hooks into your git workflow to capture AI agent sessions on every push."

Which is capturing the LLM conversation along with the code (I could be wrong, of course).
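If that reading is right, one plausible shape for the captured data (a guess at the idea, not git-ai's actual format) is a JSON payload per commit, something you could attach with `git notes`:

```python
import json

def session_note(commit_sha: str, model: str, messages: list) -> str:
    """Build a JSON blob describing an AI agent session for a commit.
    Field names are invented for illustration; you could attach the
    result with e.g. `git notes add -m "<payload>" <sha>`."""
    return json.dumps(
        {
            "commit": commit_sha,
            "model": model,
            # e.g. [{"role": "user", "content": "..."}]
            "messages": messages,
        },
        indent=2,
    )

note = session_note(
    "0123abc",
    "example-model",
    [{"role": "user", "content": "rename foo to bar"}],
)
print(note)
```

The appeal of notes (or any commit-attached metadata) is that the conversation travels with the push without polluting the commit history itself, which matches the "capture on every push" phrasing.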


Spinning up temporary VMs/stateful machines is going to be super valuable in the next year or two. Heroku not jumping on this just shows the state of Salesforce. Absolutely inept. I foresee Slack going down a similar path of enshittification.


Thanks for reminding me that Slack is owned by Salesforce. What are we going to use when Slack turns to shit? IRC again, maybe?


type.com (full disclosure I am one of the cofounders)


You gotta fix the home page. I'm not sitting through a movie trailer to find out how your chat app works.


Haha, we actually launched it today. Point taken though! It's not a video, just an interactive widget: instead of scrolling, you press enter. Once we launch, I'm sure we'll have something more traditional.

