
An Expert Advice Value Problem

Created: 2021-07-12
Wordcount: 1.2k

Epistemic Status: Probably well known, wanted to articulate problem for myself.

1.

I recently needed to buy a car.

I did a few things to make a good decision. I read a lot of reviews of various cars, by various authors. I also looked at hard numbers: MPG, cargo volume, number of recalls, and so on. I also test-drove a few, although in the end that counted for relatively little in my decision.

It became pretty evident that the concerns of a typical car expert were quite different than my own concerns. For instance, a typical car expert, having been in a large number of cars, is quite sensitive to a kind of "interior quality" -- the perceived beauty and niceness-to-touch-and-look-at of materials used on the dashboard, steering wheel, and so on. This has relatively little to do with the actual reliability of the car (as far as I can tell) and so is something to which I am relatively indifferent.

Similarly, a typical car expert cares a great deal about the acceleration and handling of a car. I am certainly not indifferent to the acceleration of a car. I think I might have opinions about what different vehicles feel like going around corners, when I actively try to have the opinions. But my concerns once again differ from an expert's quite a lot.

So overall ratings -- n points out of 10 -- from experts are not much good to me. Expert opinions can be useful to me, in that they can confirm various facts about a car: whether a car will be reliable, what the cost of repair would be, and so on. Even so, these are still at least a little questionable, given that car experts are as surely subject to halo effect bias as any other human.

2.

Many movies have enormous divides between the average moviegoer rating and the average reviewer rating.

Some movies are infamous for this, such as The Last Jedi, which apparently 90% of critics but only 42% of the audience liked. Similarly for Darren Aronofsky's Noah.

This makes a lot of sense. Movie reviewers watch a lot of movies. So it is likely that they will place a premium on novelty and originality -- tropes that normal moviegoers might enjoy are likely to seem extremely old and tired to a normal movie reviewer. They are able to attend to various fine-grained filmic details to which the average moviegoer is often indifferent. The process of becoming a movie "expert" has changed their values away from the values of the typical moviegoer.

So, once again, potential moviegoers have every reason to distrust the overall movie ratings that movie critics produce. And because it is difficult for the average critic to separate their overall opinion about a movie from evaluation of individual parts of a movie, potential moviegoers also have reason to distrust more fine-grained analysis from movie critics.

3.

Both of the following are probably true.

  • Those likely to become expert in some field have (before becoming an expert) values different from those who are not likely to become expert in that field. (Car people / Shakespeare people / machine learning people were already kinda different before becoming car people / Shakespeare people / machine learning people.)
  • The process of becoming an expert in some field changes your values regarding it. (Movie people / math people / philosophy people are changed by the process of becoming movie people / math people / philosophy people.)

4.

Let me focus on the second of these.

When you read Buddhist works, they explain the process of becoming a Buddhist as a process of value-transformation.

People often start Buddhist meditation for various reasons. You might want to get rid of stress, so you can work better at your startup. Or you might be worried about an illness you have, and think that meditation will help you get rid of that worry. Or you might simply want the feeling-nice thing you maybe got by accident while sitting still in nature once.

Buddhists say that this kind of desire is both (1) totally fine and (2) something you'll eventually transcend. After you've meditated for a while, these kinds of desires are gradually replaced by the pursuit of enlightenment, something you taste more and more as the practice deepens. You could only have started with those other desires, in fact, but the nature of the process still produces new ones.

Expertise in many things is like that. You start to program because you want to mod a video game, but eventually you find that you enjoy programming without trying to mod video games. You start maths because you have to if you want to graduate, but eventually you find the abstract beauty of proofs compelling on its own.

This process seems inevitable. Human minds are not compartmentalized reinforcement-learning algorithms, in which we can refine our predictive model and state representation to finer and finer degrees while keeping the same reward function. The reward function changes alongside everything else; the process of becoming expert in a thing is a refinement of tastes and appetites just as much as it is a refinement of knowledge.

In other words, the Orthogonality Thesis is possibly true about abstract minds but very likely false of human minds.

5.

This means that, when asking for advice from an expert, you should consider whether the fact of their expertise means that their values are likely to diverge from yours in a relevant fashion. For instance:

  • When asking a programmer for advice on a software engineering problem, it is always possible that their notions of extensibility, beauty, and reliability will lead them to give you a solution not well-suited for your needs. (As is well-known among programmers.)

  • When asking an epidemiologist for advice on fighting a plague, it is possible that their advice will result in more harm than it prevents, because they are used to ignoring economic effects in their analysis.

To try and avoid this kind of thing, you must of course try to communicate your own values to them. Thus, when asking a programmer for advice, you should try to communicate the precise requirements, constraints, and goals of the project in which you are involved; when asking an epidemiologist for advice, you can ask them to consider economic damage as well as the spread of disease. In machine-learning terms, you provide them a new reward function, and ask them to provide an optimized policy for that reward function using their hard-acquired model.
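
To make the machine-learning framing a little more concrete, here is a minimal sketch in Python: a tiny tabular MDP whose transition model stays fixed (standing in for the expert's hard-acquired model of the world) while the reward function is swapped out, and value iteration re-plans against the new reward. The states, transitions, and reward numbers are all invented purely for illustration.

```python
import numpy as np

# A fixed "world model" (transition probabilities) with two different reward
# functions. Re-planning with the same model but *your* reward function can
# yield a different optimal policy. All numbers are made up for illustration.

n_states, n_actions, gamma = 3, 2, 0.9

# P[a, s, s'] : probability of landing in s' after taking action a in state s.
P = np.array([
    [[0.8, 0.2, 0.0],   # action 0
     [0.1, 0.8, 0.1],
     [0.0, 0.2, 0.8]],
    [[0.2, 0.8, 0.0],   # action 1
     [0.0, 0.2, 0.8],
     [0.1, 0.1, 0.8]],
])

def optimal_policy(reward):
    """Value iteration with a fixed model P but a swappable per-state reward."""
    V = np.zeros(n_states)
    for _ in range(500):
        Q = reward[None, :] + gamma * P @ V   # Q[a, s]
        V = Q.max(axis=0)
    return Q.argmax(axis=0)                    # greedy action for each state

expert_reward = np.array([0.0, 1.0, 0.0])   # what the expert tends to optimize
your_reward   = np.array([0.0, 0.0, 1.0])   # what you actually care about

print(optimal_policy(expert_reward))  # greedy actions under the expert's reward
print(optimal_policy(your_reward))    # a different policy, same world model P
```

Value iteration is just the simplest stand-in here for "provide an optimized policy"; nothing in the argument hinges on that choice.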

In a complex situation, though, this is probably only a partial fix, for a couple of reasons.

First, it is somewhere between hard and impossible to completely specify a new reward function for someone, as the Value Alignment problem makes clear. The other person will inevitably need to fill in the holes in your verbal description with their own assumptions, assumptions that can be wrong.

And second, to repeat myself, humans do not store their knowledge in models of the world, abstracted from tastes and preferences. Working out the action-value function for a new reward in an old model can be a very laborious task, and in attempting to do so any expert will likely lean on cached values, on preferences, and so on. Which means that their advice will not completely abstract from their own set of values.

So friction and other-optimizing when taking advice from experts are, in part, inevitable, given the kind of thing that humans are.