
Andrew, you and your friends should be proud. It's really encouraging to see people, especially in your generation, thinking seriously about the problem of misinformation. There are fundamental challenges you will all face with this idea:

- most current LLMs are trained on large amounts of web data that itself contains facts, opinions, and misinformation. These things are treated equally, so I would expect the LLM to get common facts right, but also to represent opinions or misinformation as facts when they are pervasive.

- LLMs "hallucinate" and tend not to know when to say "I don't know" or to not try to fact-check something that is not factual in nature.

...in short, I would expect LLMs to be an unreliable fact checker, which has the potential to do as much harm as good.



Yea, thank you for your feedback!


I'll add that although this is a good project, it really doesn't matter whether it works or not.

I don't mean the fact-checking part - that's legitimately a good thing to pursue. What I mean is that this has enormous value to you and your co-creators, way beyond the social good it might provide.

For example, at some point you're going to have to deal with nuance. Things are rarely purely right or wrong. (The earth isn't exactly round, but "round" is a good first approximation for geological beginners.)

So, I'd encourage you not to measure success here with "does it work", or how many users, or if LLMs are a suitable approach, or any metrics like that. The goal here shouldn't be popularity or "correctness".

The most value you will get is the experience of building something, ideally in a team. Of facing road-blocks and challenges and overcoming them. Or, to put it another way, have fun. And things that are easy are not fun...

Congrats on the project. May it lead you forward to discovering more about how to code, more about the world, more about yourself. Don't shy away from the hard questions. But above all keep it fun.


You're 16.

This is awesome, and you're doing great. This is such strong signal for an amazing career and impact.

Keep going!


Thank you!


Your response to critical feedback is excellent btw. Nice job all around!


> “…which has the potential to do as much harm as good.”

I find this is one of the more difficult things for people to learn to fully integrate into their psyche. Many people never learn to truly care about this and everything it means. They go on forever primarily caring about what’s good for them personally.


Strong disagree. Put yourself first nearly 100% of the time. Nobody cares about you so don’t think others are doing anything but prioritizing themselves.

I mean look at the world. Essentially everybody puts themselves first and it’s clear as day. Don’t trick yourself into being the sap doing things for the greater good.

And who cares if there’s equal potential for harm and good? The harm might be less than we imagine and the good might be better than we think it could be. “This might be bad” is a terrible reason to not do something. Nearly everything might be bad!

People are pretty resilient. They can generally deal with you being selfish.


That's grim. Don't light yourself on fire to keep others warm or nothing, but in my experience most people are hoping to make the world better where they can.

This sort of hustle culture belief is definitely present in the world, especially among finance and us techie types, but there are tons of examples of people Not acting like this. Teachers don't do it for the pay, etc. There's a reason meaningful jobs tend to pay less, and it's because so many people want to do useful, helpful things that badly.

Anyway, the point is that I want to explicitly condemn this type of thinking. Yeah, don't let fear of doing the wrong thing paralyze you, but also think through the consequences.


> most people are hoping to make the world better where they can.

Are they actually making the world better or just hoping they can?

Everybody in the developed world, if they wanted to make the world better, would live drastically differently because of their impact on the environment/climate change.

But it’s more fun to just say that we’re hoping to make the world better so we don’t have to acknowledge how selfish we actually are.

I’m not talking about hustle culture here. I’m talking about the selfishness we all partake in and do our best to ignore.


I agree that most (privileged) people behave selfishly. You don't have to look far to see that. That doesn't make it good.

Which is why advice to be explicitly selfish is jarring. We don't need advice to do that; we excel at it naturally.

There are however great rewards to be had from being unselfish. We can see that around us too. Being at least aware of our proclivities is the first step in discovering the benefits of countering them.


If people are naturally selfish and they’re giving you advice to not be selfish, why aren’t they taking their own advice? And why should you take theirs?

It’s like a burglar telling you it’s a good idea to leave your doors unlocked.


Lots of people act, at least partially, in unselfish ways.

Selfishness is not a binary characteristic. There are degrees of selfishness - and spheres of selfishness.

Just because something is in our nature, it does not mean we have to behave that way all the time. Most people are neither purely selfish, nor purely unselfish.

To answer your question though, since selfishness exists on a scale, your assumption that people offering advice are not also practicing it is, at best, a conclusion without data.


No data? I’m relying on what you said…

> I agree that most (privileged) people behave selfishly

Another great example is the tech bros telling kids to go into the trades. If it were such a great idea, why aren't they plumbers?


Maybe 3 or 4 years ago I’d tell everyone to learn web development and get a cushy frontend engineer job.

Now I don’t. It’s too hard to enter tech right now. Juniors not coming from colleges are basically ignored in the job hunt.

I generally think trade skills are better for most people than a college degree (I don’t even have one.)


fwiw I agree with you on these gripes. Tech bros are often selfish; it's something I really hate about our industry. I'm struggling a lot trying to find a software job that does good in the world, because mine doesn't and it makes me feel awful, but I am trying at least.

And anyone living an unsustainable lifestyle (almost everyone) is selfish, though it's such a huge problem that putting the blame on any individual feels wrong.

I think where we disagree is that you're so all or nothing with this. People can be selfish in some ways and not others. You can live unsustainably while also having principles in other ways. Things could always be worse. I encourage anyone to care about the world as much as you can without being self destructive about it, and I really try to live that way myself.


Sometimes the best one can do is try. It's worse to give up.

And are you a good judge of whether it's better? Better can be very complicated; it could be better in one part and worse in another, but it's not wrong to try... (unless it really becomes corrupted). We as people tend to overcorrect, and so life will always swing from one end of a spectrum to another.

I think people do want to do good in the world they just get overwhelmed, or think it's impossible, or discredit the small good things they do.

Sometimes people think that if you don't do a major good thing, all the small things don't add up - honestly though, it's often better to do smaller good things over the long term than one major thing that doesn't last.

Besides the world is getting greener, healthier and happier all the time if you look in the right places. You will always find what you ask for, so look for good and you will find it. I follow so many YouTube channels showing how much the environment is improving and how such small things really get better. I try to sponsor them when I can and I hope to do more in the future too.

I also personally grow local fruit trees and various plants in my backyard to help local native species and while minor I am doing something even if small.


> I follow so many YouTube channels showing how much the environment is improving and how such small things really get better. I try to sponsor them when I can and I hope to do more in the future too.

I take back everything I said about people being selfish.

Thank you for your service.


This is one of the worst things I've read on here. I hope it's a parody and that I'm just too stupid to understand it.


That's awful.


Thinking out loud... I don't think these problems can be solved. If you are going to do it anyway, I would suggest:

- Using a RAG architecture on top of a database of factual information. Wikipedia is probably your best bet. It is not 100% factual or correct either, but it's maybe as good as it gets. Scaling RAG to Wikipedia size is not trivial, but I think it can be done (rough sketch after this list).

- Prompting the LLM to cite its sources so people can fact-check the fact-checker

- Prompting the LLM to say it is unsure when something does not have a clear answer. I don't expect this to be reliable, but maybe somewhat better
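Here's a rough sketch of what I mean, combining all three ideas. It assumes the OpenAI Python SDK, a placeholder model name, and a hypothetical search_wikipedia retriever over a passage index (building that index is the hard part I'm glossing over):

    from openai import OpenAI

    client = OpenAI()

    def fact_check(claim, search_wikipedia, k=5):
        # search_wikipedia is hypothetical: it should return the k passages
        # most relevant to the claim, each with a title and some text.
        passages = search_wikipedia(claim, k)
        context = "\n\n".join(
            f"[{i + 1}] {p['title']}: {p['text']}" for i, p in enumerate(passages)
        )
        prompt = (
            "Using ONLY the numbered passages below, say whether the claim is "
            "supported, contradicted, or unclear. Cite passage numbers for every "
            "statement you make, and answer 'unsure' if the passages do not "
            "settle it.\n\n"
            f"Passages:\n{context}\n\nClaim: {claim}"
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            temperature=0,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content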


There's a whole art to prompting an LLM to say it's unsure. I need to write a blog post about this; it's deep.


Sample a bunch of LLMs with the same question; if they disagree much, then they are unsure. You can even sample the same LLM with a high enough temperature, text augmentations, different prompts, or different demonstrations. When they are correct they say the same thing, but when they make mistakes, they make different ones. This only works for factual or reasoning tasks, but that's where it matters.
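A minimal sketch of the sampling idea, assuming the OpenAI Python SDK and a placeholder model name; the exact-string comparison is a deliberately crude stand-in for a real similarity check:

    from collections import Counter
    from openai import OpenAI

    client = OpenAI()

    def sample_answers(question, n=5, temperature=1.0):
        answers = []
        for _ in range(n):
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder model
                temperature=temperature,
                messages=[
                    {"role": "system", "content": "Answer with a single short sentence."},
                    {"role": "user", "content": question},
                ],
            )
            answers.append(resp.choices[0].message.content.strip().lower())
        return answers

    def agreement(answers):
        # Crude proxy: fraction of samples matching the most common answer.
        most_common, count = Counter(answers).most_common(1)[0]
        return most_common, count / len(answers)

    answers = sample_answers("Were fireworks invented in China?")
    answer, score = agreement(answers)
    if score < 0.6:  # arbitrary threshold
        print("Model seems unsure:", answers)
    else:
        print("Consistent answer:", answer, f"({score:.0%} agreement)")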


But how do you know if the LLMs agree, when all of them word the response differently?

For example

LLM 1: Yes, it is true that fireworks were invented in China

LLM 2: Fireworks were indeed invented in China


Plot twist: you can ask LLMs to provide a Bayesian prior on their belief in the truth value (from multiple perspectives, again) and then plug that into a variety of algorithms.
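A toy sketch of that, assuming the OpenAI Python SDK and a placeholder model name: ask for a number instead of free text, then pool the samples (a plain average here, but fancier aggregation is possible):

    import re
    from openai import OpenAI

    client = OpenAI()

    def p_true(statement, n=5):
        probs = []
        for _ in range(n):
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder model
                temperature=1.0,
                messages=[{
                    "role": "user",
                    "content": "On a scale of 0 to 1, how likely is this statement "
                               f"to be true? Reply with a number only.\n\n{statement}",
                }],
            )
            # Pull the first number out of the reply; skip samples we can't parse.
            m = re.search(r"[01](?:\.\d+)?", resp.choices[0].message.content)
            if m:
                probs.append(float(m.group()))
        return sum(probs) / len(probs) if probs else None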


Ask another model if the two statements are in agreement of course! ;)


This is trivially achievable with function calling, assuming the model you use supports this (which most models do at this point).

Define a function `reportFactual(isFactual: boolean)` and you will get standardized, machine-readable answers to do statistics with.
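For example, with the OpenAI Python SDK it could look roughly like this (the model name is a placeholder; other providers have similar but not identical tool-calling shapes):

    import json
    from openai import OpenAI

    client = OpenAI()

    tools = [{
        "type": "function",
        "function": {
            "name": "reportFactual",
            "description": "Report whether the statement is factually correct.",
            "parameters": {
                "type": "object",
                "properties": {"isFactual": {"type": "boolean"}},
                "required": ["isFactual"],
            },
        },
    }]

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": "Fireworks were invented in China."}],
        tools=tools,
        # Force the model to call reportFactual rather than reply in prose.
        tool_choice={"type": "function", "function": {"name": "reportFactual"}},
    )

    call = resp.choices[0].message.tool_calls[0]
    verdict = json.loads(call.function.arguments)["isFactual"]
    print(verdict)  # True/False, easy to aggregate across many samples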


Simpler yet, just tell the model "Reply with 'Yes' or 'No'."


I’ve used function calls with OpenAI. But are there any good local LLMs that you can run with Ollama that support function calling?


If you expect an OpenAI-compatible API to use function calls, I don't think Ollama supports it yet (to be confirmed). However, you can do it yourself using the appropriate tokens for the model. I know that Llama3, various Mistrals, and Command-R support function calling out of the box.

Here are the tokens to achieve this in Mixtral 8x22 https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1...

Pass function definitions in the system prompt.
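If you'd rather not deal with model-specific tokens, a cruder version of the same idea is to describe the function in plain text in the system prompt and ask for JSON back. Rough sketch, assuming some local OpenAI-compatible server (the endpoint and model name are placeholders, and the JSON parsing will fail if the model wraps its answer in prose):

    import json
    import requests

    SYSTEM = (
        "You are a fact checker. You can only respond by calling the function "
        "reportFactual(isFactual: boolean). Reply with JSON only, for example: "
        '{"name": "reportFactual", "arguments": {"isFactual": true}}'
    )

    def check(statement):
        resp = requests.post(
            "http://localhost:8080/v1/chat/completions",  # placeholder endpoint
            json={
                "model": "local-model",  # placeholder model name
                "temperature": 0,
                "messages": [
                    {"role": "system", "content": SYSTEM},
                    {"role": "user", "content": statement},
                ],
            },
        )
        text = resp.json()["choices"][0]["message"]["content"]
        # Brittle on purpose: assumes the model really did reply with bare JSON.
        return json.loads(text)["arguments"]["isFactual"]

    print(check("Fireworks were invented in China."))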


I think llamafile supports an OpenAI-compatible API.

https://github.com/Mozilla-Ocho/llamafile


Yea ok, we already have the citing thing done, and are going to start working on the RAG architecture soon.


You don't have to solve it, you just have to try...



