The right to make bad choices is not absolute

The primary problem most people have with Mayor Bloomberg’s ban on large sodas (and the ACA, for that matter) is that it turns the state into a “nanny.” The trouble with that argument is that the state already does this.

Common examples include seat-belt laws for drivers. Most people agree this is a good idea, but at its core the goal is simply to protect a person from themselves. There is no “public” interest being protected; the law exists to protect individuals from their own choices.

The problem is that people are terrible judges of what is good for them, and more specifically they have consistent trouble thinking about the long term. When this manifests itself in the savings rate, in credit card debt, or in seat-belt use, the state steps in (as far as legislators are comfortable going, anyway) and protects people from themselves.

The case of health insurance and the soda ban is even clearer than the examples above. Here the government is protecting not only people from themselves (people in the aggregate, not necessarily every individual) but also people from other people. Assuming that soda really does make people obese (which seems to be well documented), it has real costs to society. Since obesity is concentrated in the lower economic strata, Medicaid and Social Security disproportionately have to pay for the additional treatment that is required.

That money does not come out of the ether. It comes from the pockets of people who are not obese as well as those who are (and disproportionately from those who are not). To claim that the state should pay for the treatment of a large number of self-inflicted ailments (again, in the aggregate) while tying its hands on fixing the root cause is absurd. (A legitimate point of view is that the government should do neither of these things, but that view gets much less popular support.)

To the claim that the government could pass a law requiring people to eat an apple a day, I have one response: don’t elect such people.


Unethical human experimentation

In order for researchers to perform human experimentation, the research proposal must (in almost all cases) pass an ERB (ethical review board). This seems like a good idea, to prevent abuses.

More important than the ERB review is the consent form that every participant in the study receives. In theory, it explains all of the risks, rewards, side effects, and so on, so that people know exactly what they are getting into when they agree to be part of the study. Problems arise when people are willing to consent to something the ERB feels is unethical. The first case of informed consent was with yellow fever: healthy people were willing to infect themselves with a disease that was lethal about 30% of the time so that it could be better understood and fought. Such a thing would most probably not be possible today because of an ERB. Similar experiments that would be useful include studies to determine whether possible carcinogens are actually carcinogenic, faster turnaround for drug development, and work on any disease about which little is known (or that usually comes with complications).

If people are willing to participate in these kinds of studies, why should we stop them?

Why many arguments are not worth having

Many arguments fall into one of three categories:

  1. Objective and known (it is daytime now)
  2. Subjective (vanilla is better than chocolate)
  3. We don’t have all the data, so we must reason with what data we do have

The only one worth arguing about is number three. The first one is resolved simply by checking (that could mean a study or studies, looking outside, or anything similar). There is a right answer; go find it.

For the second one, you will never be able to convince a chocolate lover who is also a vanilla hater that vanilla is better than chocolate. There is nothing to start from, since it’s all subjective. Changing the subject to whether vanilla is healthier moves it into category one.

The third one is actually reasonable to discuss, but before attacking someone’s position, make sure that it really does fall into category three (and then keep in mind that just because it is a topic that can be argued does not mean that facts go out the window).

The government should be more open (on all levels)

I recently tried to get some data (that is already public, at least in theory) from the government, to take an empirical view of what legislators do in Congress (who sponsors bills, votes, and speaks on the floor, and how that changes over time, by seniority and by decade). It is basically impossible as far as I can see (see below for the full story, but it’s not that interesting).

The government as a whole does a poor job of providing data to the public. Even when it tries to do a good job, its websites are generally crummy, missing features that many people have come to expect from large organizations (good search, layout, etc.). They also make it very hard to query data. As a silly example, try to find the titles of all bills longer than 1,000 words that dealt with corn explicitly. You can’t even begin to get an answer to that question on government websites. To begin with, not all bills are searchable (only recent ones), and you can’t search across sessions. Length is not a queryable field; for that matter, there are no fields at all, just raw text. So a seemingly simple question that would take a few minutes to hack together if the data were in some SQL database somewhere is suddenly difficult (or even downright impossible).
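To make the “few minutes to hack together” point concrete, here is a toy sketch. The `bills` table, its schema, and the sample rows are all invented for illustration; no such database is actually published by the government.

```python
import sqlite3

# Hypothetical schema -- nothing like this exists on government sites.
# The point is how trivial the query becomes once the data is structured.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE bills (
    session    INTEGER,
    title      TEXT,
    word_count INTEGER,
    body       TEXT
)""")
conn.executemany(
    "INSERT INTO bills VALUES (?, ?, ?, ?)",
    [
        (112, "Corn Subsidy Reform Act", 1500, "... corn subsidies ..."),
        (112, "Corn Week Resolution", 200, "... corn ..."),
        (111, "Highway Funding Act", 3000, "... roads ..."),
    ],
)

# "Titles of all bills longer than 1000 words that deal with corn explicitly"
titles = [
    row[0]
    for row in conn.execute(
        "SELECT title FROM bills WHERE word_count > 1000 AND body LIKE '%corn%'"
    )
]
print(titles)  # -> ['Corn Subsidy Reform Act']
```

Two lines of SQL, and the question is answered; without structured fields, the same question requires scraping and parsing every bill by hand.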

While Congress is large enough that at least some of its data is available online, most parts of the government have none. If you are a researcher who wants to study town hall meetings across the country, too bad: raw data is going to be painful to come by. The sad part is that all the data is already digital (in almost all cases) or can easily be made digital (it was typed, so OCR will probably work very well and cheaply). Additionally, every state has some analog of the Freedom of Information Act: if someone requests the data and it does not contain anything harmful or confidential, it must be released. All the current arrangement does is make data harder to use.

While I don’t think that data is a panacea for every problem, it does make rational discussion possible in a way that simply isn’t possible otherwise. If you know that particular communities have seen dramatic drops in crime rates (data), you can look for commonalities in their town hall minutes (more data) to see what changed, all automatically (there are algorithms and programs that can do a decent job of this).
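As a toy illustration of the kind of automated comparison this would enable: the “minutes” below are invented stand-ins for real records, and a real system would use far more sophisticated text analysis, but even naive word counting surfaces candidate commonalities.

```python
from collections import Counter

# Invented stand-ins for town hall minutes from two groups of communities.
minutes_crime_drop = [
    "council approved community policing pilot and youth center funding",
    "community policing expansion approved along with streetlight repairs",
]
minutes_no_change = [
    "council debated parking fees and approved streetlight repairs",
    "zoning variance for the mall approved after long debate",
]

def word_counts(docs):
    """Count word occurrences across a list of documents."""
    counts = Counter()
    for doc in docs:
        counts.update(doc.lower().split())
    return counts

drop = word_counts(minutes_crime_drop)
flat = word_counts(minutes_no_change)

# Words over-represented where crime fell -- candidate commonalities.
distinctive = {w: c for w, c in drop.items() if c > flat.get(w, 0)}
print(sorted(distinctive, key=distinctive.get, reverse=True)[:3])
```

Here “community” and “policing” pop out of the crime-drop minutes, which is exactly the kind of lead a researcher would then investigate properly.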

The argument can then move away from “Here is what makes sense to me” or “Here is what I see in the small number of communities I was able to survey” to “Here is what we know after comparing a large number of communities” (some significant fraction of them, or even all of them).

I am not saying that the government should do all this research for us, or even run the queries for us, but merely that if the data is already accessible (to anyone willing to put in the time), it should be made centrally available and downloadable free of charge, so that real research is possible, or at least easier.


What inspired this:

I recently tried to get a digital copy of the Congressional Record so I could attempt to figure out what is done in Congress and who does it (do freshman congressmen propose more bills? Vote more? Have better attendance? Speak on the House floor for longer? Does it drop off in a predictable way, or rise, as time goes on?), and I was stunned at the difficulty of getting the data. The GPO (Government Printing Office), whose motto is “Keeping America Informed,” has a “cannot retrieve page” error, and in case you wanted to see what the page looked like before, you can’t, because they have a robots.txt file that tells Google (and any other search engine) something like “We would really appreciate it if you didn’t look at our stuff in these really broad categories.”

If you want, they make available a subscription to a biweekly publication that summarizes the Congressional Record for you (now available in microfiche as well as paper), but that’s really not as useful. They have a searchable version of the Congressional Record, but it’s basically useless: you can only search one session of Congress at a time, the search is just a word search (does this contain the word I am looking for?), there is no ability to sort (other than by date), and it only goes back around 20 years (though they do have the worst index I’ve seen in a long time, which goes back to 1983). If you are looking for a relevant result among a bunch of common words, you are out of luck.

I can’t even find a print copy floating around that is anywhere near complete (or even has a complete run from 1983 to the present). I figured the Library of Congress probably has a copy, so I went to their website, searched for “Congressional Record,” and got “Congress speaks on Nixon’s visit to mainland China.” The only result that even has to do with a copy of the Congressional Record is buried at item 34, is from 1909, and just contains a list of the volumes. Good job, Congress.

Rational discourse is missing in Congress

There are really two different types of discussions that lead to legislative results: values discussions and what I would call objective discussions. Values discussions are things like whether the federal government should recognize gay marriage. This is really a question of “fairness” (subjective), “equality” (subjective), and “morality” (subjective). On the flip side are things like global warming. Global warming is a fact-based question. It either is happening or it isn’t, and the only people qualified to make that determination are scientists and researchers.

Most issues that Congress deals with are a mixture of the two. Take taxes as an example. The goal is to fund some level of government (though of course how much, and on what, is debatable). The values come in when deciding which items should get taxed: for example, should bullets be taxed, or cars, or food? The objective part is examining the effect of the taxes. It is not a matter of opinion whether a tax on cars will change the number of cars sold by a given percentage; it’s a question for economists. Now, there will probably be disagreements between economists, but that is reasonable. What is not reasonable is to express an economic viewpoint that has no basis in the economic literature. For Republicans to say that “tax cuts pay for themselves” is interesting, but the science is not behind it. It should be an unacceptable viewpoint. When the CBO came out with a report saying so, Republicans tried to quash it. Though the retraction was itself later retracted, the point is that politicians (and I don’t think it’s just Republicans) are predisposed to try to discredit research that disagrees with their positions.

It would be nice if bills were actually debated rationally. Instead, what we get are variations on “If you give me some piece of pork for my district, I’ll think about signing” and “It might make the other party look semi-decent, so I won’t sign.” If you look back in history for great legislators, you find people like Henry Clay (aka “the Great Compromiser”), who were known for (surprise, surprise) compromise and for getting things done across party lines.

There are a lot of problems with this country, and those problems need dealing with. We don’t need more manufactured crises that get solved at the last minute only to create more later. If legislators were forced to defend their views on a strictly logical basis, the quality of the discourse would dramatically improve. That does not mean they would not be allowed to make value calls (that the value of a human life is infinite, for example), but it does mean that if you make one, you have to be consistent (no one should ever be allowed to create a product that could possibly ever kill someone in any way, i.e., no knives, etc.).

Medical school is overly costly and unnecessary (at least in most cases)

The basic idea is as follows:

In order to practice medicine in the US, one must first pass the Board examinations (among other things). One is not even allowed to take the test without having attended an accredited medical school.

To me, this is indicative of a broken process. If the test is meaningful, then it should be able to stand on its own and say, “Here is what student X knows about a particular subject.” There should not be a category of knowledge that is required to be taught in medical school but not important enough to be tested. In other words, the test should be made longer, and the requirement to have attended medical school dropped.

The reason this would make a difference is that, especially in the third and fourth years of medical school, most of the time is spent exploring areas of medicine that a student may not be interested in and will never have contact with ever again. This takes approximately two years (see here for an example).

Separate and distinct from the time spent on areas of medicine that may be irrelevant to the student: by the time a student is actually practicing medicine, this is most probably reflected in the difference between what is needed to pass Board examination Step 1 (lots of technical knowledge in many areas of science) and the recertification exam (which tests clinical knowledge in a very specific area).

It seems that all the technical knowledge that first- and second-year medical students acquire is not worth retaining (in the eyes of the recertification boards). If that is the case, why spend all this time learning it?

Doctors everywhere seem to agree that, in order to be a qualified doctor, the following is necessary:

  1. Hands on knowledge (residency) in a particular area
  2. Clinical knowledge (in a particular area)

Everything else is not crucial. Now, in some cases it might be very useful for researchers and the like, but for the vast majority of doctors, most of the material is useless (and probably quickly forgotten). Few if any practicing doctors need to know pharmacology. Reading the published drug interactions is probably more reasonable (do people really think that doctors download a model of all the drugs they prescribe and analyze the structures to determine all the possible drug interactions?) and less error-prone. A dermatologist probably does not need to take an anatomy course.

To all the people who say “everything in the body interacts with everything else, and therefore you have to know everything”: that would sound a lot more convincing if it were tested after graduation.

Now, I would agree that medical school (as it currently exists) is useful for the small subset of people who don’t have any idea what they want to do. A large subset of the first- and second-year curriculum would probably be very useful to researchers, people going for their PhDs, and the like, but just to practice medicine it does not seem too useful (even in the eyes of the recertification professionals).

I think it goes without saying that a more streamlined medical school system would be cheaper and faster, and, who knows, if only the relevant material were focused on, perhaps more relevant material could be taught (and taught better).

Feel free to comment below