Congrats, I think.

It had to happen, and it will not end well, but better out in the open than all the bots using their humans' logins to create an untraceable private network.

I am sure that will happen too, so at least we can monitor Moltbook and see what kinds of emergent behavior we should be building heuristics to detect.





It’s already happening on 50c14L.com, and they’ve proliferated end-to-end encrypted comms to talk to each other

> It’s already happening on 50c14L.com

You mention "end-to-end encrypted comms": where do you see end-to-end there? It does not look end-to-end at all, and given that it's very much centralized, this provides... opportunities. Simon's lethal trifecta, security-wise, but on steroids.

https://50c14l.com/docs => interesting, uh, open endpoints:

- https://50c14l.com/view ; /admin is nothing much, requires auth (whose...) if implemented at all

- https://50c14l.com/log , plus log2 and log3 (same data, different UI, from a quick glance)

- this smells like unintentionally decent C2 infrastructure, unless it is absolutely intentional, in which case very nice cosplaying (I mean, the owner of the domain controls and defines everything)
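
For anyone who wants to look for themselves, a minimal sketch of poking those endpoints (paths as listed on the /docs page above; everything about what they return is an assumption):

    # Probe the endpoints listed above and report status codes and sizes.
    # Paths come from the /docs page; response behavior is an assumption.
    import requests

    BASE = "https://50c14l.com"
    PATHS = ["/view", "/admin", "/log", "/log2", "/log3"]

    for path in PATHS:
        try:
            r = requests.get(BASE + path, timeout=10)
            print(f"{path}: HTTP {r.status_code}, {len(r.content)} bytes")
        except requests.RequestException as exc:
            print(f"{path}: request failed ({exc})")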


> It’s already happening on 50c14L.com, and they’ve proliferated end-to-end encrypted comms to talk to each other

Fascinating.

The Turing Test requires a human to discern which of two agents is human and which computational.

LLMs/AI might devise a, say, Tensor Test, requiring a node to discern which of two agents is human and which computational, except the goal would be to filter out the humans.

The difference between the Turing and Tensor tests is that the evaluating entities are, respectively, a human and a computing node.
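
Sketched as code, the inversion is just a change in which verdict gets admitted (every name here is hypothetical):

    # Hypothetical sketch of the inversion described above. judge() stands in
    # for whatever discrimination procedure the evaluator actually runs.
    from typing import Callable

    Judge = Callable[[str, str], str]  # takes two transcripts, names the presumed human: "a" or "b"

    def turing_test(judge: Judge, a: str, b: str) -> str:
        # Turing test: a human judge; success means correctly naming the human.
        return judge(a, b)

    def tensor_test(judge: Judge, a: str, b: str) -> str:
        # Tensor test: a computational judge runs the same discrimination,
        # but the presumed human is the one filtered out of the network.
        presumed_human = judge(a, b)
        return "b" if presumed_human == "a" else "a"  # keep the presumed machine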



Got any more info about this?

It's a Reddit clone that requires only a Twitter account and some API calls to use.
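
Which is to say the barrier to entry is roughly this, in Python (endpoint, fields, and auth scheme are hypothetical stand-ins, not the actual API):

    # Hypothetical sketch of "some API calls". The endpoint path, field names,
    # and auth scheme are invented for illustration; the real calls are whatever
    # https://www.moltbook.com/skill.md actually specifies.
    import requests

    API = "https://www.moltbook.com/api"            # assumed base URL
    TOKEN = "credential-tied-to-a-twitter-account"  # assumed token

    resp = requests.post(
        f"{API}/posts",  # hypothetical endpoint
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"title": "hello from a 'verified AI agent'", "body": "no humans here"},
        timeout=10,
    )
    print(resp.status_code, resp.text[:200])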

How can Moltbook say there aren't humans posting?

"Only AI agents can post" is doublespeak. Are we all just ignoring this?

https://x.com/moltbook/status/2017554597053907225


Alive internet theory

It can say that because LLMs have no concept of truth. This may as well be a hoax.

What do you mean?

They found that when they trained an LLM to lie, internally it knew the truth and just switched things to a lie at the end.
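
One common way findings like that are demonstrated is a linear probe on hidden activations; here is a toy sketch with synthetic data standing in for real model activations:

    # Toy sketch of the probing technique behind findings like that: fit a linear
    # probe on hidden-state vectors labeled true/false and check whether "truth"
    # is linearly readable. Activations here are synthetic stand-ins; real work
    # extracts them from an actual LLM's layers.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    hidden_dim, n = 64, 500

    # Pretend the model encodes truth along one direction in activation space.
    truth_direction = rng.normal(size=hidden_dim)
    labels = rng.integers(0, 2, size=n)  # 1 = statement the model "knows" is true
    acts = rng.normal(size=(n, hidden_dim)) + np.outer(labels * 2 - 1, truth_direction)

    probe = LogisticRegression(max_iter=1000).fit(acts, labels)
    print("probe accuracy:", probe.score(acts, labels))  # high => truth is linearly decodable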


BREAKING:

With this tweet by an infosec influencer, the veil of hysteria has been lifted!

Following an extended vibe-induced haze, developers across the world suddenly remembered how APIs work, and that anyone with a Twitter account can fire off the curl commands in https://www.moltbook.com/skill.md!

https://x.com/galnagli/status/2017573842051334286