Archive for the ‘statistics’ Category

A massive trial, a huge publication delay, and enormous questions

It’s been called the “largest clinical* trial ever”: DEVTA (Deworming and Enhanced ViTamin A supplementation), a study of Vitamin A supplementation and deworming in over 2 million children in India, just published its results. “DEVTA” may mean “deity” or “divine being” in Hindi but some global health experts and advocates will probably think these results come straight from the devil. Why? Because they call into question — or at least attenuate — our estimates of the effectiveness of some of the easiest, best “bang for the buck” interventions out there.

Data collection was completed in 2006, but the results were just published in The Lancet. Why the massive delay? According to the accompanying discussion paper, it sounds like the delay was rooted in very strong resistance to the results after preliminary outcomes were presented at a conference in 2007. If it weren’t for the repeated and very public shaming by the authors of recent Cochrane Collaboration reviews, we might not have the results even today. (Bravo again, Cochrane.)

So, about DEVTA. In short, this was a cluster-randomized 2×2 factorial trial: blocks were allocated to receive 6-monthly vitamin A supplementation, 6-monthly albendazole, both, or neither (open control).

The results were published as two separate papers, one on Vitamin A and one on deworming, with an additional commentary piece:

The controversy is going to be more about what this trial didn’t find than what it did: the confidence interval on the Vitamin A study’s mortality estimate (mortality ratio 0.96, 95% confidence interval 0.89 to 1.03) is consistent with anything from an 11% mortality reduction to a 3% increase. The consensus from previous Vitamin A studies was mortality reductions of 20-30%, so this is a big surprise. Here’s the abstract to that paper:

Background

In north India, vitamin A deficiency (retinol <0·70 μmol/L) is common in pre-school children and 2–3% die at ages 1·0–6·0 years. We aimed to assess whether periodic vitamin A supplementation could reduce this mortality.

Methods

Participants in this cluster-randomised trial were pre-school children in the defined catchment areas of 8338 state-staffed village child-care centres (under-5 population 1 million) in 72 administrative blocks. Groups of four neighbouring blocks (clusters) were cluster-randomly allocated in Oxford, UK, between 6-monthly vitamin A (retinol capsule of 200 000 IU retinyl acetate in oil, to be cut and dripped into the child’s mouth every 6 months), albendazole (400 mg tablet every 6 months), both, or neither (open control). Analyses of retinol effects are by block (36 vs 36 clusters).

The study spanned 5 calendar years, with 11 6-monthly mass-treatment days for all children then aged 6–72 months.  Annually, one centre per block was randomly selected and visited by a study team 1–5 months after any trial vitamin A to sample blood (for retinol assay, technically reliable only after mid-study), examine eyes, and interview caregivers. Separately, all 8338 centres were visited every 6 months to monitor pre-school deaths (100 000 visits, 25 000 deaths at ages 1·0–6·0 years [the primary outcome]). This trial is registered at ClinicalTrials.gov, NCT00222547.

Findings

Estimated compliance with 6-monthly retinol supplements was 86%. Among 2581 versus 2584 children surveyed during the second half of the study, mean plasma retinol was one-sixth higher (0·72 [SE 0·01] vs 0·62 [0·01] μmol/L, increase 0·10 [SE 0·01] μmol/L) and the prevalence of severe deficiency was halved (retinol <0·35 μmol/L 6% vs 13%, decrease 7% [SE 1%]), as was that of Bitot’s spots (1·4% vs 3·5%, decrease 2·1% [SE 0·7%]).

Comparing the 36 retinol-allocated versus 36 control blocks in analyses of the primary outcome, deaths per child-care centre at ages 1·0–6·0 years during the 5-year study were 3·01 retinol versus 3·15 control (absolute reduction 0·14 [SE 0·11], mortality ratio 0·96, 95% CI 0·89–1·03, p=0·22), suggesting absolute risks of death between ages 1·0 and 6·0 years of approximately 2·5% retinol versus 2·6% control. No specific cause of death was significantly affected.

Interpretation

DEVTA contradicts the expectation from other trials that vitamin A supplementation would reduce child mortality by 20–30%, but cannot rule out some more modest effect. Meta-analysis of DEVTA plus eight previous randomised trials of supplementation (in various different populations) yielded a weighted average mortality reduction of 11% (95% CI 5–16, p=0·00015), reliably contradicting the hypothesis of no effect.

Note that instead of just publishing these no-effect results and leaving the meta-analysis to a separate publication, the authors go ahead and do their own meta-analysis of DEVTA plus previous studies and report that — much attenuated, but still positive — effect in their conclusion. I think that’s a fair approach, but it also reveals that the study’s authors very much believe there are large Vitamin A mortality effects despite the outcome of their own study!
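For readers unfamiliar with how a single huge trial can drag a pooled estimate toward the null, here is a minimal fixed-effect inverse-variance meta-analysis sketch in Python. The DEVTA numbers come from the abstract above; the other two trials are hypothetical stand-ins for the earlier literature, not the actual eight studies the authors pooled.

```python
import math

def pooled_rr(estimates):
    """Fixed-effect inverse-variance meta-analysis on the log scale.
    estimates: list of (rr, ci_lo, ci_hi) tuples with 95% CIs."""
    weights, wlogs = [], []
    for rr, lo, hi in estimates:
        log_rr = math.log(rr)
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # back out SE from CI width
        w = 1 / se**2                                    # weight = inverse variance
        weights.append(w)
        wlogs.append(w * log_rr)
    pooled = sum(wlogs) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se_pooled),
            math.exp(pooled + 1.96 * se_pooled))

# DEVTA's estimate plus two hypothetical earlier trials (illustrative numbers only)
trials = [(0.96, 0.89, 1.03),   # DEVTA, from the abstract above
          (0.70, 0.55, 0.90),   # hypothetical smaller trial
          (0.75, 0.60, 0.93)]   # hypothetical smaller trial
rr, lo, hi = pooled_rr(trials)
print(f"pooled mortality ratio {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Because DEVTA’s narrow confidence interval gives it far more weight than the smaller trials, the pooled ratio lands much closer to 1 than the earlier estimates, which is the same kind of attenuation the authors report.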

[The only media coverage I’ve seen of these results so far comes from the Times of India, which includes quotes from the authors and Abhijit Banerjee.]

To be honest, I don’t know what to make of the inconsistency between these findings and previous studies, and am writing this post in part to see what discussion it generates. I imagine there will be more commentaries on these findings over the coming months, with some decrying the results and methodologies and others seeing vindication in them. In my view the best possible outcome is an ongoing concern for issues of external validity in biomedical trials.

What do I mean? Epidemiologists tend to think that external validity is less of an issue in randomized trials of biomedical interventions — as opposed to behavioral, social, or organizational trials — but this isn’t necessarily the case. Trials of vaccine efficacy have shown quite different efficacy for the same vaccine (see BCG and rotavirus) in different locations, possibly due to differing underlying nutritional status or disease burdens. Our ability to interpret discrepant findings can only be as sophisticated as the available data allows, or as sophisticated as allowed by our understanding of the biological and epidemiologic mechanisms that matter on the pathway from intervention to outcome. We can’t go back in time and collect additional information (think nutrition, immune response, baseline mortality, and so forth) on studies far in the past, but we can keep such issues in mind when designing trials moving forward.

All that to say, these results are confusing, and I look forward to seeing the global health community sort through them. Also, while the outcomes here (health outcomes) are different from those in the Kremer deworming study (education outcomes), I’ve argued before that lack of effect or small effects on the health side should certainly influence our judgment of the potential education outcomes of deworming.

*I think given the design it’s not that helpful to call this a ‘clinical’ trial at all – but that’s another story.

20 03 2013

On regressions and thinking

Thesis: thinking quantitatively changes the way we frame and answer questions in ways we often don’t notice.

James Robinson, of Acemoglu and Robinson fame (ie, Why Nations Fail, Colonial Origins, Reversal of Fortune, and so forth), gave a talk at Princeton last week. It was a good talk, mostly about Why Nations Fail. My main thought during his presentation was that it’s simply very difficult to develop a parsimonious theory that covers something as complicated as the long-term economic and political development of the entire world! As Robinson said (quoting someone else), in social science you can say “more and more about less and less, or less and less about more and more.”

The talk was followed by some great discussion where several of the tougher questions came from sociologists and political economists. I think it’s safe to say that a lot of the skepticism of the Why Nations Fail thesis is centered around the beef that East Asian economies, and especially China, don’t fit neatly into it. A&R argue here on their blog — not to mention in their book, which I’ve alas only had time to skim — that China is not an exception to their theory, but I think that impression is still fairly widespread.

But my point isn’t about the extent to which China fits into the theory (that’s another debate altogether); it’s about what it means if or when China doesn’t fit into the theory. Is that a major failure or a minor one?  I think different answers to that question are ultimately rooted in a difference of methodological cultures in the social science world.

As social science becomes more quantitative, our default method for thinking about a subject can shift, and we might not even notice that it’s happening. For example, if your main form of evidence for a theory is a series of cross-country regressions, then you automatically start to think of countries as the unit of analysis, and, importantly, as being more or less equally weighted. There are natural and arguably inevitable reasons why this will be the case: states are the clearest politicoeconomic units, and even if they weren’t they’re simply the unit for which we have the most data. While you might (and almost certainly should!) weight your data points by population if you were looking at something like health or individual wealth or well-being, it makes less sense when you’re talking about country-level phenomena like economic growth rates. So you end up seeing a lot of arguments made with scatterplots of countries and fitted lines — and you start to think that way intuitively.

When we switch back to narrative forms of thinking, this is less true: I think we all agree that all things being equal a theory that explains everything except Mauritius is better than a theory that explains everything except China. But it’s a lot harder to think intuitively about these things when you have a bunch of variables in play at the same time, which is one reason why multiple regressions are so useful. And between the extremes of weighting all countries equally and weighting them by population are a lot of potentially more reasonable ways of balancing the two concerns, that unfortunately would involve a lot of arbitrary decisions regarding weighting…
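To make the equal-weights point concrete, here is a small simulation (with made-up data, purely for illustration) comparing an unweighted cross-country fit to a population-weighted one. Two "demographic giants" that deviate from the general pattern barely move the unweighted line, but they dominate the weighted one.

```python
import numpy as np

rng = np.random.default_rng(0)

# 50 small countries follow one relationship...
x_small = rng.normal(0, 1, 50)
y_small = 0.5 * x_small + rng.normal(0, 0.3, 50)
pop_small = rng.uniform(1, 50, 50)           # populations in millions

# ...and two demographic giants sit well below that line.
x_big = np.array([2.0, 2.2])
y_big = np.array([-1.0, -1.2])
pop_big = np.array([1300.0, 1200.0])

x = np.concatenate([x_small, x_big])
y = np.concatenate([y_small, y_big])
pop = np.concatenate([pop_small, pop_big])

def ols_slope(x, y, w=None):
    """Slope of an (optionally weighted) least-squares fit with intercept."""
    w = np.ones_like(x) if w is None else np.asarray(w, float)
    X = np.column_stack([np.ones_like(x), x])
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta[1]

u = ols_slope(x, y)           # every country counts equally
w = ols_slope(x, y, w=pop)    # countries weighted by population
print(f"unweighted slope {u:.2f}, population-weighted slope {w:.2f}")
```

In the unweighted regression the two giants are just 2 of 52 points; in the weighted regression they carry most of the total weight, so the fitted line (and the story you tell with it) flips.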

This is a thought I’ve been stewing on for a while, and it’s reinforced whenever I hear the language of quantitative analysis working its way into qualitative discussions — for instance, Robinson said at one point that “all that is in the error term,” when he wasn’t actually talking about a regression. I do this sort of thing too, and don’t think there’s anything necessarily wrong with it — until there is. When questioned on China, Robinson answered briefly and then transitioned to talking about the Philippines rather than concentrating on China. If the theory doesn’t explain China, a nation of 1.3 billion, at least not to the satisfaction of many, then explaining a country of 90 million is less impressive. How impressive you find an argument depends in part on the importance you ascribe to the outliers, and that depends in part on whether you were trained in the narrative way of thinking, where huge countries are hugely important, or the regression way of thinking, where all countries are equally important units of analysis.

[The first half of my last semester of school is shaping up to be much busier than expected — my course schedule is severely front-loaded — so blogging has been intermittent. Thus I’ll try and do more quick posts like this rather than waiting for the time to flesh out an idea more fully.]

25 02 2013

Why did HIV decline in Uganda?

That’s the title of an October 2012 paper (PDF) by Marcella Alsan and David Cutler, and a longstanding, much-debated question in global health circles. Here’s the abstract:

Uganda is widely viewed as a public health success for curtailing its HIV/AIDS epidemic in the early 1990s. To investigate the factors contributing to this decline, we build a model of HIV transmission. Calibration of the model indicates that reduced pre-marital sexual activity among young women was the most important factor in the decline. We next explore what led young women to change their behavior. The period of rapid HIV decline coincided with a dramatic rise in girls’ secondary school enrollment. We instrument for this enrollment with distance to school, conditional on a rich set of demographic and locational controls, including distance to market center. We find that girls’ enrollment in secondary education significantly increased the likelihood of abstaining from sex. Using a triple-difference estimator, we find that some of the schooling increase among young women was in response to a 1990 affirmative action policy giving women an advantage over men on University applications. Our findings suggest that one-third of the 14 percentage point decline in HIV among young women and approximately one-fifth of the overall HIV decline can be attributed to this gender-targeted education policy.

This paper won’t settle the debate over why HIV prevalence declined in Uganda, but I think it’s interesting both for its results and the methodology. I particularly like the bit on using distance from schools and from market center in this way, the idea being that they’re trying to measure the effect of proximity to schools while controlling for the fact that schools are likely to be closer to the center of town in the first place.
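For readers unfamiliar with the instrumenting strategy, here is a toy two-stage least squares (2SLS) sketch with simulated data. The variable names mirror the paper’s setup, but the data, the coefficients, and the unobserved "ability" confounder are all invented for illustration; this is not the authors’ actual specification.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000

# Simulated setup (illustrative only, not the paper's data):
dist_market = rng.uniform(0, 10, n)                      # control: town centrality
dist_school = 0.5 * dist_market + rng.uniform(0, 5, n)   # instrument, correlated with centrality
ability = rng.normal(0, 1, n)                            # unobserved confounder

# Enrollment depends on the instrument and the confounder...
enroll = (2.0 - 0.3 * dist_school + 0.5 * ability + rng.normal(0, 1, n) > 0).astype(float)
# ...and abstention depends on enrollment (true effect 0.4) plus the confounder.
abstain = 0.4 * enroll + 0.3 * ability + rng.normal(0, 1, n)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)
# Stage 1: predict enrollment from the instrument plus controls.
X1 = np.column_stack([ones, dist_school, dist_market])
enroll_hat = X1 @ ols(X1, enroll)
# Stage 2: regress the outcome on predicted enrollment plus the same controls.
X2 = np.column_stack([ones, enroll_hat, dist_market])
beta_iv = ols(X2, abstain)[1]

# Naive OLS for comparison (biased upward here by the omitted confounder).
X_ols = np.column_stack([ones, enroll, dist_market])
beta_ols = ols(X_ols, abstain)[1]
print(f"OLS {beta_ols:.2f} vs 2SLS {beta_iv:.2f} (true effect 0.40)")
```

The key assumption, as in the paper, is that the instrument (distance to school) affects the outcome only through enrollment once you condition on controls like distance to market; the second stage then recovers something close to the true effect while naive OLS is contaminated by the confounder.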

The same paper was previously published as an NBER working paper in 2010, and it looks to me as though the addition of those distance-to-market controls was the main change since then. [Pro nerd tip: to figure out what changed between two PDFs, convert them to Word via pdftoword.com, save the files, and use the ‘Compare > two versions of a document’ feature in the Review pane in Word.]

Also, a tip of the hat to Chris Blattman, who earlier highlighted Alsan’s fascinating paper (PDF) on TseTse flies. I was impressed by the amount of biology in the tsetse fly paper; a level of engagement with non-economic literature that I thought was both welcome and unusual for an economics paper. Then I realized it makes sense given that the author has an MD, an MPH, and a PhD in economics. Now I feel inadequate.

03 01 2013

Alwyn Young just broke your regression

Alwyn Young — the same guy whose paper carefully accounting for growth in East Asia was popularized by Krugman and sparked an enormous debate — has been circulating a paper on African growth rates. Here’s the 2009 version (PDF) and October 2012 version. The abstract of the latter paper:

Measures of real consumption based upon the ownership of durable goods, the quality of housing, the health and mortality of children, the education of youth and the allocation of female time in the household indicate that sub-Saharan living standards have, for the past two decades, been growing about 3.4 to 3.7 percent per annum, i.e. three and a half to four times the rate indicated in international data sets. (emphasis added)

The Demographic and Health Surveys are large-scale nationally-representative surveys of health, family planning, and related modules that tend to ask the same questions across different countries and over large periods of time. They have major limitations, but in the absence of high-quality data from governments they’re often the best source for national health data. The DHS doesn’t collect much economic data, but they do ask about ownership of certain durable goods (like TVs, toilets, etc), and the answers to these questions are used to construct a wealth index that is very useful for studies of health equity — something I’m taking advantage of in my current work. (As an aside, this excellent report from Measure DHS (PDF) describes the history of the wealth index.)

What Young has done is to take this durable asset data from many DHS surveys and try to estimate a measure of GDP growth from actually-measured data, rather than the (arguably) sketchier methods typically used to get national GDP numbers in many African countries. Not all countries are represented at any given point in time in the body of DHS data, which is why he ends up with a very unbalanced panel data set for “Africa,” rather than being able to measure growth rates in individual countries. All the data and code for the paper are available here.

Young’s methods themselves are certain to spark ongoing debate (see commentary and links from Tyler Cowen and Chris Blattman), so this is far from settled — and may well never be. The takeaway is probably not that Young’s numbers are right so much as that there’s a lot of data out there that we shouldn’t trust very much, and that transparency about the sources and methodology behind data, official or not, is very helpful. I just wanted to raise one question: if Young’s data is right, just how many published papers are wrong?

There is a huge literature on cross-country growth empirics. A Google Scholar search for “cross-country growth Africa” turns up 62,400 results. While not all of these papers are using African countries’ GDPs as an outcome, a lot of them are. This literature has many failings which have been duly pointed out by Bill Easterly and many others, to the extent that an up-and-coming economist is likely to steer away from this sort of work for fear of being mocked. Relatedly, in Acemoglu and Robinson’s recent and entertaining take-down of Jeff Sachs, one of their criticisms is that Sachs only knows something because he’s been running “kitchen sink growth regressions.”

Young’s paper just adds more fuel to that fire. If African GDP growth has been 3.5 to 4 times the rate indicated by official data, then every single paper that uses the old GDP numbers is now even more suspect.

Bad pharma

Ben Goldacre, author of the truly excellent Bad Science, has a new book coming out in January, titled Bad Pharma: How Drug Companies Mislead Doctors and Harm Patients.

Goldacre published the foreword to the book on his blog here. The point of the book is summed up in one powerful (if long) paragraph. He says this (emphasis added):

So to be clear, this whole book is about meticulously defending every assertion in the paragraph that follows.

Drugs are tested by the people who manufacture them, in poorly designed trials, on hopelessly small numbers of weird, unrepresentative patients, and analysed using techniques which are flawed by design, in such a way that they exaggerate the benefits of treatments. Unsurprisingly, these trials tend to produce results that favour the manufacturer. When trials throw up results that companies don’t like, they are perfectly entitled to hide them from doctors and patients, so we only ever see a distorted picture of any drug’s true effects. Regulators see most of the trial data, but only from early on in its life, and even then they don’t give this data to doctors or patients, or even to other parts of government. This distorted evidence is then communicated and applied in a distorted fashion. In their forty years of practice after leaving medical school, doctors hear about what works through ad hoc oral traditions, from sales reps, colleagues or journals. But those colleagues can be in the pay of drug companies – often undisclosed – and the journals are too. And so are the patient groups. And finally, academic papers, which everyone thinks of as objective, are often covertly planned and written by people who work directly for the companies, without disclosure. Sometimes whole academic journals are even owned outright by one drug company. Aside from all this, for several of the most important and enduring problems in medicine, we have no idea what the best treatment is, because it’s not in anyone’s financial interest to conduct any trials at all. These are ongoing problems, and although people have claimed to fix many of them, for the most part, they have failed; so all these problems persist, but worse than ever, because now people can pretend that everything is fine after all.

If that’s not compelling enough already, here’s a TED talk on the subject of the new book:

02 10 2012

Hunger Games critiques

My Hunger Games survival analysis post keeps getting great feedback. The latest anonymous comment:

Nice effort on the analysis, but the data is not suitable for KM and Cox. In KM, Cox and practically almost everything that requires statistical inference on a population, your variable of interest should be in no doubt independent from sample unit to sample unit.

Since your variable of interest is life span during the game where increasing ones chances in a longer life means deterring another persons lifespan (i.e. killing them), then obviously your variable of interest is dependent from sample unit to sample unit.

Your test for determining whether the gamemakers rig the selection of tributes is inappropriate, since the way of selecting tributes is by district. In the way your testing whether the selection was rigged, you are assuming that the tributes were taken as a lot regardless of how many are taken from a district. And the way you computed the expected frequency assumes that the number of 12 year olds equals the number of 13 year olds and so on when it is not certain.

Thanks for the blog. It was entertaining.

And there’s a lot more in the other comments.

20 09 2012

Three new podcasts to follow

  • DataStori.es, a podcast on data visualization.
  • Pangea, a podcast on foreign affairs — including “humanitarian issues, health, development, etc.” — created by Jaclyn Schiff (@J_Schiff).
  • Simply Statistics, a podcast by the guys behind the blog of the same name.

Another type of mystification

A long time ago (in years, two more than the product of 10 and the length of a single American presidential term) John Siegfried wrote this First Lesson in Econometrics (PDF). It starts with this:

Every budding econometrician must learn early that it is never in good taste to express the sum of the two quantities in the form: “1 + 1 = 2”.

… and just goes downhill from there. Read it.

(I wish I remembered where I first saw this so I could give them credit.)

12 09 2012

Why we should lie about the weather (and maybe more)

Nate Silver (who else?) has written a great piece on weather prediction — “The Weatherman is Not a Moron” (NYT) — that covers both the proliferation of data in weather forecasting, and why the quantity of data alone isn’t enough. What intrigued me though was a section at the end about how to communicate the inevitable uncertainty in forecasts:

…Unfortunately, this cautious message can be undercut by private-sector forecasters. Catering to the demands of viewers can mean intentionally running the risk of making forecasts less accurate. For many years, the Weather Channel avoided forecasting an exact 50 percent chance of rain, which might seem wishy-washy to consumers. Instead, it rounded up to 60 or down to 40. In what may be the worst-kept secret in the business, numerous commercial weather forecasts are also biased toward forecasting more precipitation than will actually occur. (In the business, this is known as the wet bias.) For years, when the Weather Channel said there was a 20 percent chance of rain, it actually rained only about 5 percent of the time.

People don’t mind when a forecaster predicts rain and it turns out to be a nice day. But if it rains when it isn’t supposed to, they curse the weatherman for ruining their picnic. “If the forecast was objective, if it has zero bias in precipitation,” Bruce Rose, a former vice president for the Weather Channel, said, “we’d probably be in trouble.”

My thought when reading this was that there are actually two different reasons why you might want to systematically adjust reported percentages (ie, fib a bit) when trying to communicate the likelihood of bad weather.

But first, an aside on what public health folks typically talk about when they talk about communicating uncertainty: I’ve heard a lot (in classes, in blogs, and in Bad Science, for example) about reporting absolute risks rather than relative risks, and about avoiding other ways of communicating risks that generally mislead. What people don’t usually discuss is whether the point estimates themselves should ever be adjusted; rather, we concentrate on how to best communicate whatever the actual values are.

Now, back to weather. The first reason you might want to adjust the reported probability of rain is that people are rain averse: they care more strongly about getting rained on when it wasn’t predicted than vice versa. It may be perfectly reasonable for people to feel this way, and so why not cater to their desires? This is the reason described in the excerpt from Silver’s article above.

Another way to describe this bias is that most people would prefer to minimize Type II error (false negatives) at the expense of having more Type I error (false positives), at least when it comes to rain. Obviously you could take this too far — reporting rain every single day would completely eliminate Type II error, but it would also make forecasts worthless. Likewise, with big events like hurricanes the costs of Type I errors (wholesale evacuations, cancelled conventions, etc) become much greater, so this adjustment would be more problematic as the cost of false positives increases. But generally speaking, the so-called “wet bias” of adjusting all rain prediction probabilities upwards might be a good way to increase the general satisfaction of a rain-averse general public.

The second reason one might want to adjust the reported probability of rain — or some other event — is that people are generally bad at understanding probabilities. Luckily though, people tend to be bad about estimating probabilities in surprisingly systematic ways! Kahneman’s excellent (if too long) book Thinking, Fast and Slow covers this at length. The best summary of these biases that I could find through a quick Google search was from Lee Merkhofer Consulting:

 Studies show that people make systematic errors when estimating how likely uncertain events are. As shown in [the graph below], likely outcomes (above 40%) are typically estimated to be less probable than they really are. And, outcomes that are quite unlikely are typically estimated to be more probable than they are. Furthermore, people often behave as if extremely unlikely, but still possible outcomes have no chance whatsoever of occurring.

The graph from that link is a helpful if somewhat stylized visualization of the same biases:

In other words, people think that likely events (in the 30-99% range) are less likely to occur than they really are, that unlikely events (in the 1-30% range) are more likely to occur than they really are, and that extremely unlikely events (very close to 0%) won’t happen at all.

My recollection is that these biases can be a bit different depending on whether the predicted event is bad (getting hit by lightning) or good (winning the lottery), and that the familiarity of the event also plays a role. Regardless, with something like weather, where most events are within the realm of lived experience and most of the probabilities lie within a reasonable range, the average bias could probably be measured pretty reliably.

So what do we do with this knowledge? Think about it this way: we want to increase the accuracy of communication, but there are two different points in the communications process where you can measure accuracy. You can care about how accurately the information is communicated from the source, or how well the information is received. If you care about the latter, and you know that people have systematic and thus predictable biases in perceiving the probability that something will happen, why not adjust the numbers you communicate so that the message — as received by the audience — is accurate?

Now, some made up numbers: Let’s say the real chance of rain is 60%, as predicted by the best computer models. You might adjust that up to 70% if that’s the reported risk that makes people perceive a 60% objective probability (again, see the graph above). You might then adjust that percentage up to 80% to account for rain aversion/wet bias.
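To make the two-step adjustment concrete, here is a small sketch in Python. I’m using the Tversky-Kahneman probability weighting function as a stand-in for the perception curve — that choice is an assumption on my part, though the graph above is a stylized version of the same inverse-S shape — plus a flat 10-point wet-bias bump. The exact numbers it produces depend entirely on those assumed parameters, so they won’t match my made-up 60/70/80 example precisely.

```python
GAMMA = 0.61  # Tversky-Kahneman (1992) weighting parameter (assumed curve)

def perceived(p, g=GAMMA):
    """Inverse-S probability weighting: small probabilities are overweighted,
    large probabilities underweighted."""
    return p**g / (p**g + (1 - p)**g) ** (1 / g)

def report(p_true, wet_bias=0.10):
    """Find the reported probability r such that perceived(r) equals p_true,
    then add a flat 'wet bias' bump, capped at 1."""
    lo, hi = 0.0, 1.0
    for _ in range(60):               # bisection works: perceived() is increasing
        mid = (lo + hi) / 2
        if perceived(mid) < p_true:
            lo = mid
        else:
            hi = mid
    return min(1.0, (lo + hi) / 2 + wet_bias)

r = report(0.60)
print(f"true 60% -> report {r:.0%}")
```

The inversion step handles the perception bias (what number must we say so that the audience hears 60%?), and the additive bump handles rain aversion; either step can be dropped or re-parameterized independently.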

Here I think it’s important to distinguish between technical and popular communication channels: if you’re sharing raw data about the weather or talking to a group of meteorologists or epidemiologists then you might take one approach, whereas another approach makes sense for communicating with a lay public. For folks who just tune in to the evening news to get tomorrow’s weather forecast, you want the message they receive to be as close to reality as possible. If you insist on reporting the ‘real’ numbers, you actually draw your audience further from understanding reality than if you fudged them a bit.

The major and obvious downside to this approach is that if people know it’s happening, it won’t work, or they’ll be mad that you lied even though you were only lying to better communicate the truth! One possible way around this is to describe the numbers as something other than percentages: some made-up index that sounds enough like a probability to guide the layperson, while also being open to detailed examination by those who are interested.

For instance, we all know that the heat index and wind chill aren’t the same as temperature, but rather represent how hot or cold the weather actually feels. Likewise, we could report something like “Rain Risk” or a “Rain Risk Index” that accounts for known biases in risk perception and rain aversion. The weatherman would report a Rain Risk of 80% when the actual probability of rain is just 60%. This would give recipients more useful information, while maintaining technical honesty and some level of transparency.

I care a lot more about health than about the weather, but I think predicting rain is a useful device for talking about the same issues of probability perception in health for several reasons. First off, the probabilities in rain forecasting are much more within the realm of human experience than the rare probabilities that come up so often in epidemiology. Secondly, the ethical stakes feel a bit lower when writing about lying about the weather rather than, say, suggesting physicians should systematically mislead their patients, even if the crucial and ultimate aim of the adjustment is to better inform them.

I’m not saying we should walk back all the progress we’ve made in terms of letting patients and physicians make decisions together, rather than the latter withholding information and paternalistically making decisions for patients based on the physician’s preferences rather than the patient’s. (That would be silly in part because physicians share their patients’ biases.) The idea here is to come up with better measures of uncertainty — call it adjusted risk or risk indexes or weighted probabilities or whatever — that help us bypass humans’ systematic flaws in understanding uncertainty.

In short: maybe we should lie to better tell the truth. But be honest about it.

When randomization is strategic

Here’s a quote from Tom Yates on his blog Sick Populations about a speech he heard by Rachel Glennerster of J-PAL:

Glennerster pointed out that the evaluation of PROGRESA, a conditional cash transfer programme in Mexico and perhaps the most famous example of randomised evaluation in social policy, was instigated by a Government who knew they were going to lose the next election. It was a way to safeguard their programme. They knew the next Government would find it hard to stop the trial once it was started and were confident the evaluation would show benefit, again making it hard for the next Government to drop the programme. Randomisation can be politically advantageous.

I think I read this about Progresa / Oportunidades before but had forgotten it, so it’s worth re-sharing. The way Progresa was randomized also made it more politically feasible: different areas were stepped into the program, so one cohort received it later than the others, but all the high-need areas got it within a few years. I suspect this situation, in which a government institutes a study of a program to keep it alive through subsequent changes of government, will be a less common tactic than its opposite: a government designs an evaluation of a popular program that (a) it thinks doesn’t work, (b) it wants to cut, and (c) the public otherwise likes, just to prove that it should be cut. But only time will tell.

16 08 2012