
Yeah, LLMs meet every goalpost I had in mind years ago for what AGI would look like, whether the starship voice AI in Star Trek or merely a chatbot that could handle arbitrary input.

Crazy how fast people acclimate to sci-fi tech.

The Mass Effect universe distinguishes between AI, which is smart enough to be a person—like EDI or the geth—and VI (virtual intelligence), which is more or less a chatbot interface to some data system. So if you encounter a directory on the Citadel, say, and it projects a hologram of a human or asari whom you can ask for directions, that's a VI. You don't need to worry about its feelings, because while it understands your natural language, it isn't really sentient or thinking.

What we have today in the form of LLMs would be a VI under Mass Effect's rules, and not a very good one.


Note that Mass Effect's world purposely muddies the waters between the two and blurs the lines. "Is this a VI or a real AI?" is left an open question in some cases precisely so that the player can explore the idea.

Halo draws a similar distinction: "Smart AI" is what we would generally consider AGI, or even superintelligent AGI, while "Dumb AI" is purposely limited. Our current LLMs are similar to "Dumb AI" in shape, but not even remotely close in capability.

In both universes, an "AI" or similar system will not hallucinate. If it tells you something wrong or inaccurate, it's usually because it has been tampered with, or because it has "gone crazy," which is an identifiable, abnormal state rather than a probabilistic one.

Star Trek also makes distinctions. The ship's computer, for example, largely does not make deductions, and it doesn't always operate in natural human language; instead it requires you to use specific phrasing. The Star Trek ship's computer is basically what you'd get by using 20-year-old text-to-speech to run Wikipedia and database queries, and that's mostly it. It cannot analyze data itself. Data and the fully conscious Sherlock Holmes are both capable of autonomously forming and testing a hypothesis.

It's actually weird how many people don't seem to notice that. The ship's computer in Star Trek is purposely dumb and command-driven. It is not an agent, it does not think, and it does not understand natural human language. We had the Star Trek ship's computer decades ago.
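
To make the command-driven idea concrete, here's a minimal sketch in Python (DATABASE and handle_command are made-up names for illustration, not anything from Trek canon or a real system) of a lookup-only interface: it accepts one rigid phrasing, returns stored records verbatim, and refuses everything else, so it never deduces and never hallucinates.

  # Hypothetical stored records the "computer" can recite verbatim.
  DATABASE = {
      "crew count": "1,014 persons aboard",
      "warp core status": "operating within normal parameters",
  }

  def handle_command(utterance: str) -> str:
      """Accept only the fixed phrasing 'Computer, query: <key>'.

      No hypothesis formation, no deduction, no tolerance for free-form
      language: an unrecognized phrasing is simply rejected, and a
      recognized one either returns a stored record or reports no match.
      """
      prefix = "computer, query: "
      text = utterance.lower().strip()
      if not text.startswith(prefix):
          return "Please rephrase your request."
      key = text[len(prefix):]
      return DATABASE.get(key, "No record found.")

  print(handle_command("Computer, query: crew count"))    # 1,014 persons aboard
  print(handle_command("How many people are on board?"))  # Please rephrase your request.

An agent, by contrast, would be expected to handle the free-form second question too, and to go analyze data it doesn't already have.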


Peter F. Hamilton's sci-fi novels do something similar. They differentiate between SI (Sentient Intelligence), which is essentially its own being and is not used by people, since that would amount to slavery, and RI (Restricted Intelligence), the general-purpose "AI" with strict limits placed around it.

The SI in Peter Hamilton's Commonwealth duology is pretty badass!

This is a great analogy.

The term AGI so obviously means something way smarter than what we have. We do have something impressive, but it's very limited.


The term AGI explicitly refers to something as smart as us: humans are the baseline for what "General Intelligence" means.

To clarify what I meant, “what we have” means “the AI capabilities we currently have,” not “our intelligence.”

I.e., we don't have any AI system close to human intelligence.



