To reinforce that point: we've got the world's most prominent AI-promoting company (MSFT), which has finally realized that Windows Explorer is too slow to start.
And this company, with all the formidable powers of AI behind it, can find no way to optimize that other than preloading the app in memory. And that's for an app that's basically a GUI for `ls`.
> The jobs could themselves become more desirable with machines automating the boring and dangerous parts
Or, as Cory Doctorow argues, the machines could become tools to extract "efficiency" by helping the employer make their workers' lives miserable. An example of this is Amazon and the way it treats its drivers and warehouse workers.
That depends on the social contract we collectively decide on (in a democracy at least). Many possibilities will emerge, and people will need to be aware and adapt much faster than at most times in history.
In the case of horses and cars, you need the same number of people to drive both (exactly one per vehicle). In the case of AI and automation, the entire economic bet is that agents will be able to replace X humans with Y humans. Ideally for employers Y=0, but they'll settle for Y<<X.
People seem to think this discussion is a binary where either agents replace everybody or they don't. It's not that simple. In aggregate, what's more likely to happen (if the promises of AI companies hold true) is large-scale job losses, with the remaining employees becoming the accountability sinks who bear the blame when the agent makes a mistake. AI doesn't have to replace everybody to cause widespread misery.
Yes, I understand that it's about saving on labor costs. Depending on how successful this is, it could lead to major changes in the labor market in economies where skilled workers have been doing quite well up to now.
And those humans would be looking for a new job or face other consequences. An AI model can merrily do this with zero consequences because no meaningful consequences can be visited upon it.
Just like if any human employee publicly sexually harassed his female CEO, he'd be out of a job and would find it very hard to find a new one. But Grok can do it and it's the CEO who ends up quitting.
India had not been antagonistic or ambivalent in its recent past, until a Nobel Peace Prize aspirant in the WH decided to take a machete to relations that both countries had been building for the last 25 years, with largely bipartisan support in both countries. Even the current Indian govt was quite pro-US until the aspirant tanked that relationship.
And yes, there will be times India doesn't agree with the US, and that's normal. It's seeking to be a partner, not a vassal state.
> India has not been antagonistic or ambivalent in its recent past...
Yep, but stuff can change rapidly.
From 1972-1992 it was China that was the pillar of America's Asia strategy as a bulwark against the USSR, with US soldiers posted in Xinjiang monitoring the USSR [0], US government sponsored tech transfers and scientific collaboration [1], American support for Chinese military modernization [2][3], and expanded economic cooperation [4].
Yet by the late 2000s, that relationship had degraded into the competitive, cold-war dynamic it is today, because by the 1990s US and Chinese ambitions had become misaligned - especially following US sanctions after the Tiananmen Massacre [5], Clinton's pivot to newly democratic Taiwan [6], and Chinese attempts at industrial espionage [7].
The US and India are not fully aligned because neither American nor Indian policymakers have significant exposure to the other country and remain extremely insular (eg. Stanford and Penn are the only American universities with a competitive program on contemporary Indian politics and foreign policy, and there are at most 20 American scholars on contemporary Indian policy - it was the same during my time in the early 2010s with regards to China, except instead of Penn it was Harvard), and that's why the US-India relationship has been in a tailspin for the past couple of years. The US-India relationship is now in a position equivalent to that of the US and China in the late 1990s to early 2000s, largely predicated on mutual competition against China.
Snafus like the RAW-backed Nijjar assassination as well as the US's support for Asim Munir highlight how the relationship is starting to fray. If alignment is not found within the next few years, the relationship will become competitive and potentially antagonistic in nature, because India will start feeling that the US is encircling India just like China, and the US will start viewing India as "rocking the boat".
So the problem was not with the app but with how the information was routed at the back end. The back end of the 1909 system could have been modified to write the data to a central registry as well.
So it is the company prioritising its bottom line at the expense of its customers' computers. More simply, it moves cost off its own balance sheet and converts it into risk on the customer's end.
Which is actively customer-abusive behavior, and customers should treat it with the contempt it deserves. The fact that customers don't is what enables such abuse.
This is such a weird take. In an online multiplayer game the cheaters are the risk to the company's bottom line.
If a game is rampant with cheaters, honest paying players stop showing up, and fewer new players sign up. The relatively small percentage of cheaters costs the company tons of sales and revenue.
It is actively in a company's best interest to do everything they possibly can to prevent cheating, so the idea that intentionally building sub-par anti-cheat is about "prioritising their bottom line" seems totally absurd to me.
Not to mention these abstract "the company" positions completely ignore all the passionate people who actually make video games, and how much most of them care about fair play and providing a good experience to their customers. No one hates cheaters more than game developers.
> because most companies will make decisions based on time/effort/profitability, and because client-side anticheat is stupid simple and cheap, that's what they go with. Why waste their own server resources, when they can waste the user's?
And my comment was a response to that statement. In context of that statement, companies are indeed choosing to prioritise their commercial interests in a way that increases the risk to the computers of their customers.
> Not to mention these abstract "the company" positions completely ignore all the passionate people who actually make video games
Irrelevant. Companies and their employees are two distinct entities, and a statement made about one does not automatically implicate the other. Claiming, for example, that Ubisoft enables a consistent culture of sexual harassment does not mean random employees of that company are automatically labeled as harassers.
Coming to anti-cheat, go ahead and fight cheaters all you want. That's not a problem. Demanding the right to introduce a security backdoor into your customers' machines in order to do that is the problem.
I guess the people in this instance realise they're an essential service for the economy and that without them, a lot of people could actually die. So they probably see their role as being more than simply working for people of low integrity.
In which case it’s their duty to end it. But I don’t see a million people marching in Washington, I don’t see food deliveries failing to reach the White House. I don’t see airports simply close down.
Again, there's probably a sense of responsibility towards the people moving through the airports who otherwise would be facing much greater risk to their lives.
As a non American who's having to transit the country during this period, I've nothing but respect for the individuals who're actually doing their jobs and keeping everyone (including me) safe without getting a penny for this vital work.
They could go on strike and bring all airlines to a halt, but as a brown-skinned person, I would then be risking a visit to "Alligator Alcatraz" or some other demented place because I failed to leave the country on the day I was supposed to. So again, glad they're not doing that.
In the above scenario, if Claude accidentally wipes out your Jellyfin movies, will Claude deal with consequences (ie an unhappy family/friends) or will you?
That exemption from accountability is a massive factor that renders any comparison meaningless.
In a business scenario, a model provider that could assume financial and legal liability for mistakes (as humans need to do) would be massively more expensive.