Hacker News | compass_copium's comments

And worst of all, the money you pay isn't tied to your license plate. If you overpay, someone else can park for free!!

>Will increase your Linux skills because diversity always helps the human brain

Is this still true, given how much runs through systemd now? I thought about trying out FreeBSD last time I got a new computer, but decided to stick with Debian to keep building skills transferable to other Linux systems.


Diversity of programming languages, operating systems, cultures, human languages, countries, music, etc. always gives a fresh perspective, I've found. You may go back to what you prefer in the end, but it leaves you with lessons at a "higher level" :-)

> Is this still true, given how much runs through systemd now?

Yes, still true. On FreeBSD you will realize what complexity systemd might be hiding from you and what additional features it provides. BTW I don't actually like rc init on FreeBSD that much! I feel that rc.d could learn a lot from more modern init systems like systemd, dinit, etc. I don't like reading highly complex rc scripts!
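For readers who haven't seen one, a minimal FreeBSD rc.d script follows a fairly rigid template (this is a sketch; the service name `mydaemon` and its paths are made up for illustration):

```sh
#!/bin/sh
# PROVIDE: mydaemon
# REQUIRE: NETWORKING
# KEYWORD: shutdown

. /etc/rc.subr

name="mydaemon"
rcvar="mydaemon_enable"
command="/usr/local/sbin/mydaemon"
pidfile="/var/run/${name}.pid"

load_rc_config $name
: ${mydaemon_enable:=NO}

run_rc_command "$1"
```

The simple cases stay short because rc.subr supplies the start/stop/status machinery; the complexity complained about above tends to show up when scripts override those defaults with custom shell functions.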


But the programming language has explicitly laid out rules. It was not trained on those sets of rules, but it was trained on many trillions of lines of code. It has a map of how programs work, and an explanation of this new language. It's using training data and data it's fed to generate that result.


What doesn't that explain tho?

What behavior would you need to see for that explanation to no longer hold? Because it seems like it explains too much.


I don't know how you'd prompt this, but if there was a clean example of an A.I. coming up with an idea that's completely novel in more than details, it would be compelling evidence that these next-token predictors have some weird emergent properties that don't necessarily follow from intricate, sophisticated webs of token-prediction.

E.g. "What might be a room-temperature superconductor?" -> "some plausible iteration on existing high-temperature superconductors based on our current understanding of the underlying physics" would not fall outside our current understanding of these models.

"What might be a room-temperature superconductor?" -> "some completely outlandish material that nobody has studied before and, when examined, seems to superconduct at higher temperatures than we would predict" would provoke some serious questions.

A fun experiment I've heard suggested is training a model on all scientific understanding just up to some counterintuitive quantum leap in scientific understanding, say, Einstein's theory of relativity, and then seeing if you can prompt it to "discover" or "invent" said leap, without explicitly telling it what to look for. This would of course be pretty hard to prove, but if you could get it to work on a local model, publish the training set and parameters so that anyone can replicate it on their own machine, that could be pretty darn compelling.
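The data-prep step of that experiment amounts to filtering a corpus by publication date. A toy sketch (the dict-based record format and `year` field are invented for illustration; a real corpus would need far more careful metadata handling):

```python
# Keep only documents published strictly before a cutoff year,
# e.g. 1905, the year of Einstein's special relativity papers.
CUTOFF_YEAR = 1905

def filter_corpus(records, cutoff=CUTOFF_YEAR):
    """Return only records published strictly before the cutoff year."""
    return [r for r in records
            if r.get("year") is not None and r["year"] < cutoff]

corpus = [
    {"title": "On the Electrodynamics of Moving Bodies", "year": 1905},
    {"title": "A Dynamical Theory of the Electromagnetic Field", "year": 1865},
    {"title": "Experimental Researches in Electricity", "year": 1839},
]

pretraining_set = filter_corpus(corpus)
print([r["year"] for r in pretraining_set])  # [1865, 1839] -- the 1905 paper is excluded
```

The hard part, of course, is not the filter but sourcing a corpus whose metadata is reliable enough that nothing post-cutoff leaks in.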


Why would it matter whether or not the robot looks something up if it makes a novel discovery?

Why would it matter that the discovery wasn't just novel but felt like an unconventional one to me, someone who is probably a total outsider to that field?

Both of those feel subjective or at least hard to sustain.

Look. What I'm trying to tell people is that the easy explanations for how these models worked circa GPT-2 are just not cutting it anymore. Neither is setting some subjective and needlessly high bar for...what exactly? What? Do we decide to pay attention to AI after it does all the above? That seems a bit late to the party for cheering on or resisting it.

Some new shit is afoot. Folk need to pay attention, not think they got it figured out already.


Programs are fundamentally lists of instructions. LLMs are very good at building these lists. That it performs well when you say "Build a list you've seen before, but do it in a slightly different way this time. Here's the exact way I want you to do it." is not surprising. I would honestly be surprised if it couldn't do it.

As the other commenter suggested, a genuinely novel scientific idea would be surprising. A new style of art (think Picasso or Pollock coming along), not just an iteration on Ghibli, would be surprising. That's actual creativity.


>I would honestly be surprised if it couldn't do it.

You'd be surprised if an LLM couldn't write *any* program?


That’s still over-general to the point of being useless.

What you wrote would apply to a human approaching this task as well, sans the “many trillion lines of code”.


Ah, just time the collapse perfectly. Wish I'd thought of that ;)


Timing it "perfectly" is impossible unless you're psychic or very lucky.

The good news is you don't have to be perfect. You can be late and still make money. The important thing is to be prepared and ready to pounce.

When the AI bubble blows, it's going to take the whole stock market down with it.


A missile will always be cheaper than a missile interceptor, and the interceptor will never achieve a 1:1 kill rate. Building a missile interceptor system is a good way to get your strategic opponent to build a bigger stockpile.


Disagree on always being cheaper. Military planners are obsessed with the best weapons, so such interceptors are pricey. But look at Israel's Iron Dome: ~$50k/shot. They deliberately built a dumb SAM because it was designed to go against dumb targets--objects falling freely on a ballistic trajectory. While they are usually facing light stuff that isn't even worth that price, they have successfully engaged longer-range stuff that costs many times what the interceptor does.

Overall, though, the offense always wins this one because interceptors can only protect a limited area whereas missiles can go anywhere.


Iron Dome is a great example of my point. It is a $50k interceptor designed to take out a propane tank with a rocket strapped to it, not a real ballistic missile like a Scud.

Patriot missiles ($7MM) take out Scuds ($3MM).
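The cost-exchange arithmetic behind this argument is easy to make explicit. A sketch using the rough figures quoted in this thread (the prices and kill probabilities here are the comment's approximations, not authoritative data):

```python
# Expected defender spend per missile destroyed, relative to the
# attacker's cost per missile. Ratios above 1.0 mean the defender
# is losing the cost exchange.
def cost_exchange_ratio(interceptor_cost, missile_cost, kill_probability=1.0):
    """Defender cost per kill divided by attacker cost per missile."""
    shots_per_kill = 1.0 / kill_probability
    return (interceptor_cost * shots_per_kill) / missile_cost

# Patriot (~$7MM) vs. Scud (~$3MM), assuming a perfect 1:1 kill:
print(round(cost_exchange_ratio(7_000_000, 3_000_000), 2))  # 2.33

# With an assumed 50% kill probability, the ratio doubles:
print(round(cost_exchange_ratio(7_000_000, 3_000_000, 0.5), 2))  # 4.67
```

Even granting the interceptor a perfect kill rate, the defender spends more than twice what the attacker does per engagement, which is the stockpile-incentive point made above.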


I care a lot more about my life (or my car's catalytic converter, which was stolen off my car in my work parking lot before they installed a gate for the lot) than any of my work-related IT credentials. Health and safety threats are a much bigger deal to people than nebulous, difficult-to-exploit threats to IP.


Except the turnstiles and swipe cards do almost nothing against an active shooter situation.

But missing in this discussion is a risk and consequence analysis. If the risk is armed attackers, do something that targets that. For physical theft, target that. Likewise IT risks. The core problem is that risks were not being identified (systematically or in response to expert feedback) and prioritised.

Incidentally, the solution to car park access is ALPRs, and the solution to most of the physical security is solid-core doors at the workgroup level with EACS swipe and surveillance cameras there, and at the front desk face-level 4K video surveillance, with an on-duty guard to resolve issues with access.


> The core problem is that risks were not being identified (systematically or in response to expert feedback) and prioritised.

Or the person who wrote the article just wasn't involved in that loop, or otherwise disagreed on what threat models mattered.


It seems much more a compliance and auditing goal. To meet some objective of knowing who is in the office at what time, which informs office space leasing decisions, return to office mandates, decisions of charging for staff parking, etc. Personnel protection seems almost an afterthought.

Protecting JIRA auth tokens is quite likely low down the list for IT security. Making sure your workers are not remote North Koreans is indeed a security benefit of secured physical offices with regular on-site work.

But the author did have a deeper point -- visible security theatre gets lots of money and management attention, while meaningful expert driven changes are mired in bureaucracy.


I still challenge whether his proposal was actually "meaningful, expert driven changes" - is this actually a serious threat vector? How would you actually exploit it, without having access to dozens of other vectors? Can you even meaningfully resolve that vulnerability when you have people walking in off the streets due to a lack of physical security?


I also like AntennaPod for audiobooks--fewer apps that way.


...am I wrong in thinking that 1(a) is the relevant section here, and shows a lot of red?


I honestly don't see the point of the red data points. By now all the Erdős problems have been attempted by AIs--so every unsolved one can be a red data point.


The post's author points that out as well.


I have to use Windows at work and I will never have weird cloud authentication issues because I'm required to use a work-provided MS account on the computer. The author says he's a Windows guy, and always will be. This article, and these types of complaints, are really only relevant if you're using it on your personal PC.


TBF that's a fairly contrarian view from Sheldon. I started favoring the front brake after reading his writing on it, and do find it useful. I do find it easier to go down on ice that way though, so be careful!

