I worked on a product that had to integrate with Salesforce because virtually all of our customers used it. It must have been a terrible match for their domain, because they had all integrated differently, and all the integrations were bad. There was virtually no consistency from one customer to the next in how they used the Salesforce data model. Considering all of these customers were in the same industry and had 90% overlapping data models, I gave up trying to imagine how any of them benefited from it. Each one must have had to pay separately for bespoke integrations to third-party tools (as they did with us) because there was no commonality from one to the next.
One thing that's interesting is that their original Salesforce implementations were so badly done that I could imagine them being done with an LLM. The evergreen stream of work that requires human precision (so far, anyway) is all of the integration work that comes afterwards.
In the west, we've had a long, deep split between what ordinary people rely on (religion and self-help) and respectable academic philosophy. Philosophy rooted in religion has a strict requirement to scale down to serve masses of people. Philosophy rooted in academia has a strict requirement to scale up to allow practitioners to flex their elite skills and show that they are worthy of scarce academic positions. Academic philosophers pay lip service to the idea that philosophy can and should be for everyone, but in practice, they shy away from anything that could compromise their primary pursuit of a career and academic prestige.
As a result, they mostly respond to efforts to reach a lay audience by distancing and criticizing. They are really harsh on the compromises inherent in meeting lay audiences where they are.
That's a pretty weak take. The difference between philosophy texts on ethics and the better self-help texts is just the difference between pulp fiction and classic novels. Time needs to pass before anybody is willing to go "actually, this is worth analyzing". That said, there's a lot of self-help that isn't philosophical (or, more exactly, doesn't attempt to defend the philosophy whose conclusions it presents).
Consider the difference between "Thou shalt not kill, thou shalt not commit adultery" and "you shouldn't kill or sleep with your neighbor's wife because both actions cause more harm than they provide benefit, which ought to be our goal because the conclusions of such a cost/benefit analysis closely align with most people's natural sense of right and wrong". The former is a statement of morals. If you include the "...because God said so, and God is always right", then it becomes an ethical argument, like the latter. The key is arguing the why down to axioms, and defending those axioms as superior to other axioms.
A self-help book like "How to Win Friends and Influence People" provides rules to follow to achieve a desired outcome, and attempts to explain why the rules work. It doesn't spend much, if any, energy (it's been a while) arguing why you should want the desired outcome, or whether the desired outcome is actually a good thing.
> Time needs to pass before anybody is willing to go "actually, this is worth analyzing".
I think that's exactly the problem: philosophers assume, by default, that self-help is unworthy of their time, and only pay attention to the rare cases that happen to have philosophical merit.
They could take a more active interest in questions such as: how can philosophy improve self-help literature? What kinds of ideas should ordinary people with low to average education consume? The wide array of values, goals, and philosophical approaches would make it a contentious and lively conversation.
But philosophers tend to vacate the field and leave it to mercenaries, culture warriors, and amateurs. When they do speak about it, it tends to be in symposiums or on podcasts aimed at college-educated people with a special interest in philosophy. That's as far down as they're willing to dumb it.
Philosophers don't "vacate the field". Many, maybe even most ethics texts are directly applicable to one's life. It comes with the territory of a field based around asking "What ought one do?".
They do tend to enjoy less market success than the less rigorous slop, but that's a symptom of a much broader problem in the world: someone dedicated to doing something well is at a disadvantage versus someone dedicated to winning. It's the whole "anyone who is capable of getting themselves made President should on no account be allowed to do the job" Douglas Adams quote; it's why it's still not the year of the Linux desktop despite Linux having offered a superior OS for years; it's why IKEA has practically killed the market for quality furniture; and it's why damn near every corporation you can name is led by some ghoulish psychopath. In most competitions, you can simply get a lot more mileage out of optimizing for the competition than you can squeeze out of the underlying skill. So the dude optimizing for selling books is going to knock the socks off the one trying to rigorously convey a robust ethical framework.
If we're going to fix that basic flaw in society, I think we should probably start with more pressing matters than who's selling more self-help books.
To me that sounds like philosophers not being willing to lower themselves to meet people where they are.
There are plenty of professionals who don't let arbitrary standards of rigor get in the way of communicating with people. For medicine, there's an entire subspecialty of public health professionals who specialize in crafting communication for broad audiences. They don't target only the people who are capable of processing communications of a certain rigor, and they don't retire their specialty because advertisers will always have the upper hand.
Not to mention that many fields are taught as school subjects, so they have to be presented to literal children. Of course the school curricula of history, literature, and science are taught with a naivete and lacunae that would be travesties if judged by professional standards, but historians aren't calling for teachers to stop teaching a dumbed-down version of history to children. They accept the necessity of it and debate how best to do it.
That seems like a rather cynical take. I think you’re conflating philosophy as guidance for how to live (stoicism etc) and philosophy as more of a science to explore unanswered questions, which are naturally going to have very different practitioners and audiences?
The latter can be applicable to the former. Traditionally the connection was acknowledged, with Socrates the prototype of the philosopher who believed that happiness, ethical living, and philosophy were inextricably linked. Obviously philosophy has come a long way since Socrates, but academic philosophers continue to give lip service to the idea that philosophy can be valuable in everyday living, if not in ethics then in processing information, critiquing arguments, and understanding the origins and limitations of ideas.
I think we've known since the time of Socrates that the practice of philosophy is not the practice of happy living. Philosophers tend to be miserable. Socrates himself chose to drink poison over moving to a different city. I think most philosophies, despite their myriad differences, agree that what people tend to want is not what philosophy will give them. Maybe some of the answers philosophy yields can be applied to increase happiness, but philosophy in practice tends to produce questions.
Most philosophers would not agree that yielding questions instead of answers makes philosophy unhelpful, nor that the happiest life is necessarily the one in which pain is most successfully avoided.
I've received questions like this from very good, very reasonable, very technically careful managers. What happens is, Mike complains and tries to throw you under the bus, and the manager reaches out to hear your side of it. You tell them Mike is trying to ship code with a bunch of issues and no tests, and they go back to Mike and tell him that he's the problem and he needs to meet the technical standards enforced by the rest of the team.
Just because management asks doesn't mean they're siding with Mike.
I have been on both - actually on all three - sorry, make that four - sides.
1. I tried to ship crap and complained to my manager for being blocked. I was young, dumb, in a bad place and generally an asshole.
2. I was the manager being told that some unreasonable idiot from X blocked my report's progress. I was the unreasonable manager demanding that my people be unblocked. I had no context, had a very bad prior relationship with the other party, and was an asshole, because no prior bad-faith acts were actually behind the block: it was shitty code.
3. I was the manager being asked to help with unblocking. I asked to understand the issue and to actually try to - based on the facts - find a way towards a solution. My report had to refactor.
4. I was the one being asked. Luckily I had prior experience and did this time manage to not become the asshole.
I have no idea what AI changes about this scenario. It's the same scenario as when Mike did this with 1600 lines of his own code ten years ago; it just happens more often, since Mike comes up with 1600 lines of code in a day instead of in a sprint.
> I don’t blame Mike, I blame the system that forced him to do this.
Bending over backwards not to be the meanie is pointless. You're trying to stop him because the system doesn't really reward this kind of behavior, and you'll do Mike a favor if you help him understand that.
> Bending over backwards not to be the meanie is pointless.
This thinking that we must avoid blaming individuals for their own actions and instead divert all blame to an abstract system is getting a little out of control. The blameless post-mortem culture was a welcome change from toxic companies who were scapegoating hapless engineers for every little event, but it's starting to seem like the pendulum has swung too far the other way. Now I keep running into situations where one person's personal, intentional choices are clearly at the root of a situation but everyone is doing logical backflips to try to blame a "system" instead of acknowledging the obvious.
This can get really toxic when teams start doing the whole blameless dance during every conversation, but managers are silently moving to PIP or lay off the person who everyone knows is to blame for repeated problems. In my opinion, it's better to come out and be honest about what's happening than to do the blameless performance in public for feel-good points.
> I have no idea what AI changes about this scenario. It's the same scenario as when Mike did this with 1600 lines of his own code ten years ago; it just happens more often, since Mike comes up with 1600 lines of code in a day instead of in a sprint.
So now, instead of reviewing 1600 lines of bad code every two weeks, you must review 1600 lines of bad code every day (while being told that 1600 lines of bad code every day is an improvement because of just how much more bad code he's "efficiently" producing!). Scale and volume are the change.
The more often it happens, the more practice you get at delivering the bad news, and the quicker Mike learns to live up to the team's technical standards?
I think people can be in hard conditions, needing a job, under pressure, burnt out and feel like this is their only way to keep their job. At least that's how it felt with Mike.
In the end, I spent a lot of time sitting down with Mike to explain these kinds of things, but I wasn't effective.
Also, LLMs now empower Mike to make a 1600-line PR daily, and force me to distinguish between "lazyslopped" PRs and actual PRs.
It sounds like you're saying the only thing ugly about tagging is when it contains objectionable political content. That's not really responding to the complaint here, which is that the vast majority of it is low effort, low quality tagging that makes things aesthetically uglier. It's easy to go out with a collector's eye, cherry-pick the good stuff, and put together a slideshow that makes it look like a public amenity, but that ignores the overall effect of wall after building after block of proof of Sturgeon's Law.
Is it ignorable? Does all the terrible stuff just disappear into the background, or should we care about how it affects the experiences of people who have to live with it and walk past it every day? I think that's the question people are arguing.
In some flavors, Protestantism is quite focused on self-scrutiny and skepticism about human nature, making people suspicious and even actively hostile towards supposed heroes.
Other flavors of Protestantism seem to have completely lost that, though. Evangelical Protestantism somehow inculcates a need to love and worship leaders, and an ability to completely suspend rational judgment about them. Their relationship to charismatic pastors and other leaders is a mystical, ecstatic experience that they have an unlimited appetite for. No matter how many times their leaders are shown to be flawed, and in many cases quite detestable and corrupt human beings, they eagerly look for the next leader to worship.
Two stereotypes that illustrate the extremes of this massive cultural difference in Protestantism are the rich WASPs of the northeast and the poor Southern Baptists of the deep South.
WASPs know that heroes are myths, and are unsurprised when the real people turn out to be real pieces of work. Southern Baptists kind of know this on some level -- I think they're actually a bit attracted when a man has a whiff of charlatanism about him, because it shows he knows what they want -- but when they choose their hero, they give themselves over to complete and sincere belief in him.
> the ways in which I myself am dysfunctional: specifically, my addiction to being useful. (Of course, it helps that my working conditions are overall much better than Akaky's.) I'm kind of like a working dog, in a way. Working dogs get rewarded with treats, but they don't do it for the treats. They do it for the work itself, which is inherently satisfying
I haven't been able to find a source for this, but I remember reading that Marx believed that doing productive work for the benefit of human beings was part of the "species essence" of humans. Needless to say, he did not approve of how this tendency was expressed under capitalism. He said that working for compensation alienates people from their work, prevents them from fulfilling their species essence, and therefore prevents them from being fully actualized human beings.
If you're working for the satisfaction of being useful to others, that's not dysfunction. That's you beating the odds and having a healthy relationship to your work despite the external social pressure to make it about the money. I think there's no irony in the fact that you have better working conditions; in fact, it makes perfect sense: you are privileged and insulated from the harshest pressures of capitalism that force people to think only about the financial benefit to themselves and not the benefit they provide to other people.
I'm not on Twitter, but I know a lot of the content I see comes indirectly from Twitter. For example, for soccer news, I follow a number of fans and journalists on BlueSky, but they follow journalists, agents, and official team and league accounts on Twitter, as well as players and players' wives and girlfriends on Instagram. As much as I'd love for them to die and won't touch them myself, it's clear that a lot of the information I get originates on X and Meta platforms.
Having different sites dedicated to different kinds of topics helps me partition my time and energy. Otherwise it would all go to the most emotionally salient content. I get that your current choice is to go all-in on the current political crisis in the U.S., but that isn't desirable or healthy for everybody. There are plenty of sites that can help you with that, and people who aren't well-served by that also need places to go. I use HN for professionally relevant content and tech-related diversions. I'll check in on politics elsewhere.
I don't think online engagement correlates much with political effectiveness, btw. Not that I think I'm more politically effective than the average person, but I know that being more engaged online doesn't make me more so.
> Can people keep a good mental model of the repo without writing code?
This is the age-old problem of legacy codebases. The intentions and understanding of the original authors (the "theory of the program"[0]) are incredibly valuable, and codebases have always started to decline and gain unnecessary complexity when those intentions are lost.
Now every codebase is a legacy codebase. It remains to be seen if AI will be better at understanding the intentions behind another AI's creation than humans are at understanding the work of other humans.
Anyway, reading and correcting AI code is a huge part of my job now. I hate it, but I accept it. I have to read every line of code to catch crazy things like a function replicated from a library that the project already uses, randomly added to the end of a code file. Errors that get swallowed. Tautological tests. "You're right!" says the AI, over and over again. And in the end I'm responsible as the author, the person who supposedly understands it, even though I don't have the advantage of having written it.
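To make two of those smells concrete, here's a minimal, hypothetical sketch (the function and test names are invented for illustration, not from any real PR): an error that gets silently swallowed, and a tautological test that passes no matter what the code does.

```python
# Antipattern 1: swallowed error. The caller can never tell that
# parsing failed; the function silently falls through and returns None.
def parse_port(value):
    try:
        return int(value)
    except ValueError:
        pass  # the failure vanishes here

# Antipattern 2: tautological test. It compares the code's output
# against itself, so it passes regardless of whether parse_port is right.
def test_parse_port():
    assert parse_port("8080") == parse_port("8080")

# A reviewer has to read the body to notice nothing is really tested.
# A useful test pins an expected value and the error behavior explicitly:
def test_parse_port_fixed():
    assert parse_port("8080") == 8080
    assert parse_port("not-a-port") is None  # today's accidental contract
```

The tautological test is the insidious one: it raises coverage numbers and looks like diligence in a diff, which is exactly why it slips through review.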
Reviewing someone else's PR, who used Copilot but barely knows the language, has been a mixture of admiration that AI can create such a detailed solution relatively quickly, and frustration with the excess complexity, unused code, etc.