
I think you're looking for decision theory. http://en.wikipedia.org/wiki/Decision_theory


What I'm interested in, though, isn't what makes a decision by a person ethical or rational or whatever, but rather what makes a social system itself (or an algorithm that interfaces with people) ethical or unethical. E.g. looking at the ethics of the design of the social system behind the Zimbardo prison experiment, rather than looking at the decisions of the individual actors once they're already in that environment.

I am somewhat interested in questions like whether or not it's morally acceptable to lie to a system or a (non-conscious) robot. At least on the surface this doesn't violate any of Kant's moral objections to deception. That is, Kant said (more or less) that the reason lying is unethical is that it deprives the other person of their ability to make a rational decision, and our rationality is part of what makes us human, so lying is unethical because it deprives others of their humanity. So e.g. is it unethical to lie about your age on a porn site? It might be, but if it is then we'd need an entirely new framework to justify that sort of thing.

It almost seems like, given the power imbalances between people and algorithms, and the traditional justification against lying, it would be morally virtuous to teach your children to lie, as long as they're lying to a machine and not a person. (Of course a machine could be composed of people who are following an algorithm when interacting with you, but they still aren't making any sort of autonomous decisions for themselves.)

Anyway, this type of problem is a little different from the problem of what makes the systemic design of a social system ethical or unethical, but they're both just different sides of the same coin.


you can evaluate the decision function of groups of people. coming up with ways to morally justify defrauding or committing violence against groups of people by abstracting away their humanity has been a favorite pastime for millennia.


I don't keep up with the sociology/ethics literature, but I haven't heard your original thesis before and it's an interesting discussion.

I'm sure there were otherwise nice people that were Nazi soldiers. And there are probably a few nasty people who work for Google to create products millions of people love.

As social systems, as you call them, grow larger and more connected, the ethical positions of systems may become more important. The problem is that most corporations are inherently amoral, and governments almost always think they are the good guys.


I've always thought that systems were inherently evil. By Gödel's theorem, you won't be able to exhaustively list enough rules to make the system perfect. Therefore, we must always rely on individuals making human moral judgements. (This is a way simplified personal philosophy that I've never seen anyone articulate well. Maybe I'll have to write a treatise on it someday.)





