Hi all, it seems I discovered a previously unknown relationship between the Collatz conjecture and the (signed) Fibonacci numbers. I would be super grateful for any feedback. Thank you!
This sounds like one of those things that is trivially true and yet super useful if you embrace it. Do you have any more reading material about this idea?
I asked ChatGPT to rewrite the original post using your glossary, which worked well:
I've set up my system to use several AI models: the open-source Mixtral 8x7B, Dolphin (an uncensored version of Mixtral), GPT-3.5 Turbo (a cost-effective option from OpenAI), and the latest GPT-4 Turbo from OpenAI. I can easily compare their performance in Emacs. Lately, I've noticed that GPT-4 Turbo is starting to outperform Mixtral 8x7B, which wasn't the case until recently. However, I'm still waiting for access to Mistral Medium, a new, more exclusive AI model by Mistral AI.
I just found out that Perplexity, a new search engine competing with Google, is offering free access to Mistral Medium through their partnership. This makes me question Sam Altman, the CEO of OpenAI, and his claims about their technology. Mistral Medium seems superior to GPT-4 Turbo, and if it were expensive to run, Perplexity wouldn't be giving it away.
I'm guessing that Mistral AI could become the next Renaissance Technologies (a hedge fund known for its innovative strategies) of the AI world. Techniques like Direct Preference Optimization, which improves smaller models, along with other advancements like ALiBi (Attention with Linear Biases, which helps models handle longer inputs), sliding windows for longer text sequences, and combining multiple models, are now well understood. The real opportunity lies in quickly adopting these new technologies before they become mainstream and affordable.
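For anyone who hasn't run into the sliding-window trick before, here's a rough Python sketch of the attention mask it builds. This is purely illustrative (the function name and window size are mine, not any particular model's implementation): each token attends only to the previous `window` tokens instead of the whole sequence, so attention cost scales with the window size rather than the full sequence length.

```python
def sliding_window_mask(seq_len, window):
    """Return a seq_len x seq_len boolean mask where mask[i][j] is True
    iff token i may attend to token j (causal + windowed)."""
    return [
        # token i sees tokens j in the half-open range (i - window, i]
        [i - window < j <= i for j in range(seq_len)]
        for i in range(seq_len)
    ]

# Visualize a window of 3 over 6 tokens: 'x' = attention allowed.
mask = sliding_window_mask(6, 3)
for row in mask:
    print("".join("x" if allowed else "." for allowed in row))
```

Real implementations fold this mask into the attention kernel (and models like Mistral pair it with a rolling KV cache), but the masking idea itself is this small.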
Big companies are cautious about adopting these new structures, remembering their dependence on Microsoft in the past. They're willing to experiment with AI until it becomes both affordable and easy to use in-house.
It's sad to see the old technology go, but exciting to see the new advancements take its place.
The GP did a great job summarizing the original post and defining a lot of cryptic jargon that I didn't anticipate would generate so much conversation, and I'd wager did it without a blind LLM shot (though these days even that is possible). I endorse that summary without reservation.
And the above is substantially what I said, and undoubtedly would find a better reception with a larger audience.
I'm troubled though, because I already sanitize what I write and say by passing it through a GPT-style "alignment" filter in almost every interaction precisely because I know my authentic self is brash/abrasive/neuro-atypical/etc. and it's more advantageous to talk like ChatGPT than to talk like Ben. Hacker News is one of a few places real or digital where I just talk like Ben.
Maybe I'm an outlier in how different I am and it'll just be me that is sad to start talking like GPT, and maybe the net change in society will just be a little drift towards brighter and more diplomatic.
But either way it's kind of a drag: either passing me and people like me through a filter is net positive, which would suck but I guess I'd get on board, or it actually edits out contrarian originality in toto, in which case the world goes all Huxley really fast.
Door #3 where we net people out on accomplishment and optics with a strong tilt towards accomplishment doesn't seem to be on the menu.
I would have said there is no problem with your style (nothing brash or abrasive), but you used a lot of jargon that people who are not very deep into LLMs (large language models) would not understand. The interests of Hacker News visitors are very diverse; not everyone follows LLMs that closely.
This was my take exactly. I read the original and thought, "Wow, this sounds like really interesting stuff this poster is excited about. I wish I knew what the terms meant, though. I'll have to come back to this when I have more time and look up the terms."
I was pleasantly surprised to find a glossary immediately following, which tells me my issue wasn't the tone of the post but the shorthand terminology that was unfamiliar to me.
I think writing in "Ben's voice" is great. There are just going to be times when your audience needs a bit more context around your terminology, that's all.
I think the only thing you really need to do is unpack your jargon so people who aren't exactly you can understand what you're saying. Even on this site, there are folks with all sorts of different experiences and cultural context, so shortcuts in phrasing don't always come across clearly.
For example, "in which case the world goes all Huxley really fast." "Huxley" apparently means something to you. Would it mean anything at all to someone who hasn't read any Aldous Huxley? As someone who _has_, I still had to think about it -- a lot. I assumed you're referring to a work of his literature rather than something he actually believed, as Huxley's beliefs about the world certainly had a place for the contrarian and the original.
Further, I assume you are referring to his most well-known work, _Brave New World_, rather than (for example) _Island_, so you're not saying that people would be eating a lot of psychedelic mushrooms and living together in tolerant peace and love.
I don't at all think you need to sound like GPT to be a successful communicator, but you will be more successful the more you consider your audience and avoid constructions that they're unlikely to be able to understand without research.
People aren’t passing you through a filter because you are brash and undiplomatic and “unaligned”, it’s because your communication style is borderline incomprehensible.
I used to struggle a lot with communication because I talked to people in the authentic way you just described. Being too direct and blunt about my point of view has caused tension with family, colleagues, and my girlfriend.
The moment I changed the way I talk, saying "That could be a challenging and rewarding experience" instead of "That's bullshit, let's move away from it", I could already see the advantage.
I'd rather talk the way I want, but I find it challenging and not that rewarding, as people seem to be getting more sensitive. That made me wonder whether the way GPT-style chatbots communicate with humans will make humans expect the same style of communication from other humans.
Why not both? While I truly appreciate your OP and could grok it even though I don't know the tech, the summary and rewrites saved me a ton of googling. I hope one day we have a "see vernacular/original" button for all thought and communication, so people can choose what level to engage at without the author having to change their communication style. Translation for personal dialects, so to say.
Real Ben >> GPT Ben. However, if you are going out to the wider world, you probably need to self-varnish a lot (I know I would have to). You are fine in here!
What you are alluding to is quite similar to that "Instagram face" that everyone pursues and self-filters for, except it's more about your communication and thoughts. Also, I don't think the argument that you need to reach a wider audience applies unless you want the wider audience to comment and engage.
The internet is the great homogenizer; soon(ish) we will be uniform.
I think this is just in the short term. In the long term, GPTs will retain our personality while making the message more understandable, which I think is the most important thing. Although McLuhan would disagree. Benefits, though, might include AI enabling cross-cultural translation, so you can converse with someone who speaks a different language and has had very different experiences and still understand each other. I think that's good? Maybe?
Your posts are my favorite thing about Hacker News, both because of the things you're saying and the way you're saying them; please don't let anyone tell you otherwise.
Thank you! Amazing how difficult it is to keep up with all of the new jargon given how fast it's evolved. I had no idea that Mistral Medium was so great.