RE: Can science answer moral questions? I don't think so

in #science · 7 years ago

First, how would you define wellbeing? And wellbeing for whom, exactly?

The answers to these questions are why I appreciate the book The Moral Landscape. It's about comparing and contrasting the experiences of conscious beings: even if we can't measure them exactly, we can say "this is a peak" or "that's a valley" when comparing them.

I wrote more and then deleted it, because we just don't view science or truth similarly, and I don't see much point in quoting Wikipedia pages to support my opinion at the moment. To me, science isn't about providing unchanging, universal truth; it's a framework for disproving hypotheses in such a way that others can reach similar conclusions and confidently build on the results. It's always open to change and correction.


When did I say science is about unchanging truth? It's just the truth according to science, which really is the best measure we have. So if you go by the Wikipedia definition, we agree on what science is.

Where we disagree is on where science can be applied. I don't think it can be applied to "ought" questions, or to questions of why, or of origin.

I don't view mental states in a way that lets us answer questions of ought. Neuroscience shows us mental states, but in my opinion that doesn't translate into morality. I don't think morality can be reduced to mental states in practice. I think morality is a matter of what people value, and sure, with a deep enough understanding of someone's mental states you might have a clue, but ultimately the individual has to make some choice about what to value, and just reading mental states isn't the same as, for instance, written consent.

Could it be at some point? That depends on the accuracy of the brain-to-computer interface. But it still doesn't change the fact that value is subjective. You seem to be claiming that we can somehow have objective or universally shared values, and that is where we ultimately disagree, not on the definition of science. We can go with the same definition (yours) and I still wouldn't agree with your conclusion. It narrows down to whether or not values are subjective.

Yes, you can have a morality based on consensus, but we already have that. It isn't necessarily personal morality; it's more what society and public sentiment view as right and wrong. You could in theory connect all brains and have public sentiment decide what is right and wrong in real time, but that is only one kind of morality, and it erases individual morality in favor of consensus morality.

I am very skeptical of any claim of moral authority, including the claim that science can create a moral authority and determine what is best for everyone. Communists believed something similar about the government. I think moral realism is incorrect, and to accept your view as true I would have to believe in moral realism. Currently I think moral anti-realism is the better position. This doesn't mean there can't be an optimal or best decision a person can make in theory, as rational choice theory or decision theory can show; it simply means that no one else can determine it on your behalf unless they know your preferences and current values as they change in real time. An AI might be able to do this, but it doesn't exist, and even if it did, there is still the problem of "wellbeing", which again you don't define clearly in a way everyone agrees on. You can use polls, but those are approximations.
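The decision-theory point can be sketched as a tiny expected-utility calculation. Everything here (the actions, outcomes, probabilities, and utility numbers) is hypothetical; the example only illustrates that the "optimal" choice changes with the agent's own subjective values:

```python
# Sketch of expected-utility maximization from decision theory.
# All names and numbers below are made up for illustration: the
# "best" action depends entirely on the agent's own utilities.

def expected_utility(action, utilities):
    """Probability-weighted sum of utilities over the action's outcomes."""
    return sum(p * utilities[outcome] for outcome, p in action.items())

def best_action(actions, utilities):
    """The action that maximizes expected utility for THIS agent's values."""
    return max(actions, key=lambda name: expected_utility(actions[name], utilities))

# Two agents face identical options but value the outcomes differently.
actions = {
    "safe":  {"modest_gain": 1.0},
    "risky": {"big_gain": 0.5, "loss": 0.5},
}
cautious    = {"modest_gain": 5, "big_gain": 8,  "loss": -10}
adventurous = {"modest_gain": 5, "big_gain": 30, "loss": -10}

print(best_action(actions, cautious))     # safe   (risky EU = -1 < 5)
print(best_action(actions, adventurous))  # risky  (risky EU = 10 > 5)
```

Same facts, same probabilities, different "optimal" decision: the calculation can't even start until someone supplies the utility numbers, which is exactly where the subjectivity comes in.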

References

  1. https://en.wikipedia.org/wiki/Moral_realism

Take the wellbeing approach without a clear definition, but let's say it's based on mental states. Now assume an AI is in charge of all moral decisions. Is it moral for humans to have individuality and free will if an AI can control everyone to guarantee, with 100% certainty, that the most moral decisions are made at all times?

There is no free will in science, and no individual in science.
