
I wonder which other professions exhibit the same effect?

Artists certainly don't, as art remains art even when the artist themselves becomes lost to time.

I suspect politicians may be subject to it, for every problem they do solve becomes the status quo… though every problem they don't solve also becomes their fault, so perhaps not.

Civil engineers may be on the boundary: people forget that cities are built rather than natural when complaining about immigrants with phrases such as "we are full", yet remember well enough for politicians to score points by promising to get things built.



That's a good question, though I think art is one of the few things that is in some sense a parallel. While masterpieces of the past remain masterpieces today, the scope of what is considered art has been expanding; perfectly good work is still being done today in styles that would have been recognized as art in the past, but it probably will not attract the attention it would have gained had it been produced then.

Perhaps the thing that makes AI different from other aspects of computing, in terms of how its progress is regarded, is that the term invites lofty expectations.


> It's long been noticed that every time AI researchers figure out how to do a thing, it goes from "this is impossible SciFi nonsense" to "that's not real AI".

I struggle to see how anything we have today is “AI”.

So you think we’ve done it?

We’ve solved the “AI” problem.

We can just stop working on it now?

> It's weird for me to actually encounter people doing this.

Rather than posturing, perhaps you could provide us with the definition of “AI” so we can all agree it’s here.

> art remains art even when the artist themselves becomes lost to time.

And if statements remain if statements. What’s your point?

> I wonder which other professions exhibit the same effect?

I disagree that there is any “effect” worth pondering, but here’s a biting quote; had it been written by a tech bro with unsubstantiated zeal for wasting planetary resources to engorge the wealth of unethical sociopaths, it would have contained the word “goalposts”, and been the worse for it.

“Fashion is a form of ugliness so intolerable that we have to alter it every six months.” -Oscar Wilde


> I struggle to see how anything we have today is “AI”.

Your struggle is inherent in the "AI effect".

> So you think we’ve done it?

> We’ve solved the “AI” problem.

Calling it "the" is as wrong as calling all medical science "the" problem of medicine.

Replace "AI" with "medicine" and see how ridiculous your words look.

We have plenty of medicine without anyone saying "aspirin isn't medicine" or "heart transplants aren't medicine" or similar, and because nobody is saying that, nobody is saying "oh, so you think we've solved medicine, we can all just stop researching it now?"

So yeah, we've repeatedly solved problems that are AI problems and which people were arguing that no computer could possibly do even as the computers were in fact doing them.

> Rather than posturing, perhaps you could provide us with the definition of “AI” so we can all agree it’s here.

"""It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and discover which actions maximize their chances of achieving defined goals"""

Which is pretty close to the opening paragraph on Wikipedia, sans the recursion of the latter using the word "intelligence" to define "intelligence".


> """It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and discover which actions maximize their chances of achieving defined goals"""

This is exactly why I suggested you define the mental model you’re working with, because now I agree with mannykannot’s first gp addressing your original lament over those you interpret as “moving the goalposts”:

> If we are going to define AI as whatever AI researchers are working on then the only way the goalposts will not move is when they are not making progress.

Your “definition” of “AI” is full of wishy washy anthropomorphisms like “perceive” and “discover”, and elsewhere is so broad it could apply to just about anything.

An astable multivibrator with two LEDs at each output and a switch fits your definition.

The circuit is a “machine” that “perceives” the switch being pressed then “discovers” which side of the circuit will “maximize its chances of achieving the goal” of illuminating the environment.

> We have plenty of medicine without anyone saying "aspirin isn't medicine" or "heart transplants aren't medicine"

People do in fact say blood letting the humors “isn’t medicine”.

So your interpretation of “medicine” is yet another common example that also exhibits your apparently super rare “AI effect”?

But this attempt at analogy is just a distracting digression.

You appear to be changing the rigidity of your definition of “AI” ad hoc to satisfy whatever argument you’re trying to make in that moment.

In this thread alone you refer to products, the “aspirins”, as “AI”, but then claim your definition is that “AI” is a “field of research”, your “medicine”.

Take the press release from the product being discussed here and replace “AI” with “field of research”.

“Corpo just released a ‘field of research’ for your PC.”

Starting to “see how ridiculous your words look” yet?


> Your “definition” of “AI” is full of wishy washy anthropomorphisms like “perceive” and “discover”, and elsewhere is so broad it could apply to just about anything.

Neither "perceive" nor "discover" is an anthropomorphism. Not only on a technicality such as "anthropomorphism excludes animals, so you would be saying you think animals can't do those things" (I wouldn't normally bring it up, but you want live by the technical precision sword, you get this) — but also for the much more important point that examples such as a chess engine doesn't need to have eyes, neither does a chatbot, nor even a robot: they only need an input and an output.

Definitionally all functions have an input and an output, and if you insist on a mathematically precise formulation of "discover", I can rephrase that statement without loss as:

"AI is the field of research for how to automatically create a best-fit function f(x) to maximise the expected value of some other reward function r(f(y)), given only examples x_0 … x_n".

Any specific AI model is thereby some f() produced by this research.
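To make that formulation concrete, here's a toy sketch (all names, the candidate set, and the reward function are illustrative assumptions of mine, not anything claimed upthread): an exhaustive search over a handful of candidate functions, scored by expected reward on the examples.

```python
# Toy sketch of the definition above: "discover" a best-fit function f
# that maximises the expected value of a reward r(f(x)), given only
# example inputs x_0 … x_n. Everything here is illustrative.

examples = [1.0, 2.0, 3.0]           # the x_0 … x_n

def reward(output):
    # r(): a made-up reward that prefers outputs close to 4
    return -abs(output - 4.0)

# a tiny hypothesis space of candidate functions f(x) = a * x
candidates = [lambda x, a=a: a * x for a in (0.5, 1.0, 2.0, 4.0)]

def expected_reward(f):
    # average reward of f over the examples
    return sum(reward(f(x)) for x in examples) / len(examples)

# the "discovery" step: pick the f that maximises expected reward
best = max(candidates, key=expected_reward)
```

Real AI research replaces the exhaustive search with gradient descent, tree search, evolutionary methods, etc., but the shape of the problem is the same.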

> An astable multivibrator with two LEDs at each output and a switch fits your definition.

No, it doesn't, there's no "discover" in that.

And at this point, I could replace you with an early version of ChatGPT and the prompt "give a deliberately obtuse response to the following comment: ${comment goes here}"

> People do in fact say blood letting the humors “isn’t medicine”.

> So you’re interpretation of “medicine” is yet another common example that also exhibits your apparently super rare “AI effect”?

Those things, which are your examples not mine, were actively shown to not work, and this was shown by the field of research called medicine, so no.

(Also, "apparently super rare" is putting words into my mouth and wildly misrepresents "I wonder which other professions exhibit the same effect?").

Again, what you write here is so wildly wrong that I have to assume your brand new account is either a deliberate trolling attempt or a ChatGPT session with a prompt of "miss the point entirely". But of course, I have met humans who were equally capable of non-comprehension before such tools were available. (I think those humans were doing arguments-as-soldiers, but it's hard to be sure).

> You appear to be changing the rigidity of your definition of “AI” ad hoc to satisfy whatever argument you’re trying to make in that moment.

You asked for a definition, you got one, you complained about the definition. That's you being loose.

I have a fixed definition, and am noting how other people change theirs to always exclude anything that actually exists. Which you are doing, which is you being loose.

What would be shifting track, would be to use the observation that you are hard to distinguish from an LLM to introduce a new-to-this-thread definition of AI — but I'm going to say that Turing can keep the imitation game, because although his anthropomorphic model of intelligence has its uses, I view it as narrow and parochial compared to the field as a whole.

> In this thread alone you refer to products, the “aspirins”, as “AI”, but then claim your definition is that “AI” is a “field of research”, your “medicine”.

No.

Aspirin is a medicine, an example of a product of the field of research which is medicine.

The equivalence is "[[aspirin] is to [medical research]] as [[route finding] is to [AI research]]". One can shorten "I perform medical research" into "I work in medicine" and not be misunderstood, you are misunderstanding the contraction of "this is an AI algorithm" to "this is AI".

You are the one dismissing the existing solutions in the field of AI and sarcastically suggesting that anyone who says otherwise thinks we've solved all AI problems and can stop researching it now — which is as wrong as dismissing aspirin as "not a medicine" and sarcastically suggesting that anyone who says otherwise thinks "we've solved all medical problems and can stop researching it now".

> Take the press release from the product being discussed here and replace “AI” with “field of research”.

> “Corpo just released a new ‘field of research’ for your PC.”

I see you're unfamiliar with the entire history of scientific research software, too.


> You asked for a definition, you got one, you complained about the definition. That's you being loose.

Wait, I’m trying to help someone who thinks that giving an idiosyncratic definition of a broadly used term when someone asks for one requires that the provided definition be applied universally and accepted as correct without scrutiny?

> AI is the field of research for how to automatically create a best-fit function f(x) to maximise the expected value of some other reward function r(f(y)), given only examples x_0 … x_n

Cool, cool, another ad hoc change. At least this one is more precise.

> I see you're unfamiliar with the entire history of scientific research software, too.

Project much?

https://ai100.stanford.edu/2016-report/section-i-what-artifi...

Read up on: Heuristic Search, Computer Vision, Natural Language Processing (NLP), Mobile Robotics, Artificial Neural Networks, and Expert Systems.

These fields of research either lack reward functions altogether, or did so in earlier iterations.

> And at this point, I could replace you with an early version of [redacted]

> or [redacted] session with a prompt of "miss the point entirely"

Oh, I see why you are so passionate about these products now.

You can replace or dismiss anyone who points out your shortcomings with them.


> Wait, I’m trying to help someone who thinks that giving an idiosyncratic definition of a broadly used term when someone asks for one requires that the provided definition be applied universally and accepted as correct without scrutiny?

That's not even a coherent English sentence.

> Cool, cool another ad hoc change. At least this one is more precise.

You asked for it. It's identical in meaning. There's nothing "ad hoc" about this in either the original or the precise form.

> Artificial Neural Networks … lack reward functions

Deeply and fundamentally wrong.

But worse than that, the thing you linked to actively denies your own prior claim, which was (and I'm copy-pasting) "I struggle to see how anything we have today is “AI”", even though that quotation is a statement about your own beliefs.

Even by trying to use that source you are engaging as arguments-as-soldiers, using something that contradicts your own other points.

> Oh, I see why you are so passionate about these products now.

I've heard much the same projection about my inner state from an equally wrong Young Earth Baptist. Like him, you have yet to demonstrate any understanding.

I've been interested in this since back when NLP couldn't understand the word "not", and back when "I write AI" implicitly meant "for a computer game".


> But worse than that, the thing you linked to actively denies your own prior claim, which was (and I'm copy-pasting) "I struggle to see how anything we have today is “AI”", even though that quotation is a statement about your own beliefs.

This is the insidiousness of your ad hoc definition flip flopping. You can claim you meant one and I’m addressing the other, and vice versa, whenever it suits you.

The linked article is about your “field of research” definition of the term “AI” while the quote from my original reply is addressing your “product goalposts” definition.

> It's identical in meaning.

Well I’ve tried but am unable to find a dictionary that defines “discover” as relying on reward functions. Can you link me to one?

Here’s a popular dictionary’s definition: https://www.merriam-webster.com/dictionary/discover

>> Artificial Neural Networks … lack reward functions

Nice, you removed the pertinent context of “or did so in earlier iterations” with that ellipsis to make your point. Nice.

> Deeply and fundamentally wrong.

Read up on Minsky's SNARCs, which used trial-and-error learning without the need for a predefined reward function.



