
Wednesday, October 28, 2015

Change of scenery

I should have reported this a while ago, but better late than never: I have moved to a new job, at a new institution, in a new country. In fact since the 1st of October this year I have been employed by the Institute for Cosmology and Gravitation at the University of Portsmouth, where I now hold a Marie Skłodowska-Curie individual fellowship and the ICG's Dennis Sciama fellowship (though unfortunately I do not get paid two salaries at once!).

It's partly a sign of how much I have been neglecting this blog in recent times that I've only just got around to posting about this now, nearly a month after arriving here. But it is also partly due to the fact that I am still waiting for a functioning internet connection in my new home, so any blog postings must be done while still at my desk!

Anyway, I'm very excited to be working here at the ICG, because it is one of the leading cosmology institutes in the UK, and therefore by extension in Europe and also the world. The institute directors mentioned several times during staff induction meetings the statistics for how much of the research output here was graded "world-leading" or "internationally excellent" in the recent UK REF review — but I forget the numbers. In any case what's more important is that the ICG is home to world experts in many of the fields that I work in, and — crucially — that it is a very large, exciting and young department, with around 60 members, of whom 20 or so (20!) are young postdoctoral researchers, and another 20 are PhD students.

Since I like including pictures with my posts, let me put some up of the famous names associated with my fellowships. Here is Marie Curie, from her 1903 Nobel Prize portrait:

"Marie Curie 1903" by the Nobel foundation. (Public domain)
and here is Dennis Sciama:


Marie Curie is of course justly famous around the world and I'm sure everyone reading this blog is aware of her fantastic achievements — two Nobel prizes, in two different sciences, first woman to win a Nobel prize, discoverer of two elements, the theory of radioactivity, and so on.

Dennis Sciama is perhaps not quite so well known to those outside cosmology, but my what a towering figure he is within the field, and in the history of British science. Even the list of his PhD students reads like a who's-who of modern cosmology: Stephen Hawking, Martin Rees, George Ellis, Gary Gibbons, John Barrow, James Binney ...

The ICG in particular seems to have rather a fondness for Sciama — in addition to the fellowship I've already mentioned, we work in the Dennis Sciama building, and many of us make use of the SCIAMA supercomputer. I was a little puzzled by this, because although Sciama moved from Cambridge to Oxford to Trieste during his career, I wasn't aware of any special link to Portsmouth.

In fact the answer appears to be that a large proportion of the staff at the ICG happened to be (academically speaking) his grandchildren, having received their PhDs under the supervision of George Ellis or John Barrow. That, and the fact that it is always nice to name new buildings after really famous people!

Anyway, this will hopefully mark the start of a rather more regular series of posts about cosmology here — for one thing, my Marie Curie proposal included a commitment to write short explanations of each new paper I produce over the next two years!

PS: A small factoid that caught my attention about Dennis Sciama is that although he was born in Manchester, his family was actually of Syrian Jewish origin. They originally came from Aleppo, in fact, though his mother was born in Egypt. In light of recent events it seems worth pondering on that. 

Thursday, September 17, 2015

10 tips for making postdoc applications (Part 2)

This post is part 2 of a series with unsolicited advice for postdoc applicants. Part 1, which includes a description of the motivation behind the posts and tips 1 through 5, can be found here.

----

6. Promote yourself


This sounds sort of obvious, but for cultural reasons may come easier to some people than to others. I don't mean to suggest you should be boastful or oversell yourself in your CV and research statement. But be aware that people reading through hundreds of applications will not have time to read between the lines to discover your unstated accomplishments — so present the information that supports your application as clearly and as matter-of-factly as possible.

As a very junior member of the academic hierarchy, it is quite likely that nobody in the hiring department has heard of you or read any of your papers, no matter how good they were. They are far more likely to recognise your name if you happen to have taken some steps to make yourself known to them, for instance by having arranged a research visit, given a talk at their local journal club or seminar series, initiated a collaboration on topics of mutual interest, or simply introduced yourself and your research at some recent conference.

Organisers of seminar series and journal clubs are generally more than happy to have volunteers help fill some speaking slots — put anxieties to one side and just email them to ask! And if they do have time for you, make sure you give a good talk.

7. Know what type of postdoc you're applying for


There are, roughly speaking, three different categories of postdoc positions in high-energy and astro (and probably more generally in all fields of physics, if not in all sciences).

The first category is fellowships. These are positions which provide funding to the successful candidate to pursue a largely independent line of research. They therefore require you to propose a detailed and interesting research plan. They may sometimes be tied to a particular institution, but often they provide an external pot of money that you will be bringing to the department you go to. They are also highly prestigious. Examples applicable to a cosmology context are Hubble and Einstein fellowships in the US, CITA national fellowships in Canada, Royal Society and Royal Astronomical Society fellowships in the UK, Humboldt fellowships in Germany, Marie Skłodowska-Curie fellowships across Europe, and many others.

The second category is roughly a research assistant type of position. Here you are hired by some senior person who has won a grant for their research proposal, or has some other source of funding out of which to pay your salary. You are expected to work on their research project, in a pretty closely defined role.

The third category is a sort of mix of the above two, which I'll call a semi-independent postdoc. This is where the postdoc funding comes from someone's grant, but they do not specify a particular research programme at the outset, giving you a large degree of independence in what work you actually want to do.

Image credit: Jorge Cham.

When you apply, it is imperative that you know which of these three types of positions the people hiring you have in mind. There is no point trying to sell a detailed independent research plan — no matter how exciting — to someone who is only interested in whether you have the specific skills and experience to do what they tell you to. Equally, if what they want is evidence that you will drive research in your own directions, an application that lists your technical skills but doesn't present a coherent plan of what you will do with them is no good.

Unfortunately postdoc ads don't use these terms, so it is generally not clear whether the position is of the second or third type. If in doubt, contact the department and find out what they want.

Also, even when the distinction may be clear, many people still produce the same type of application for all jobs they apply for in one cycle. I know of an instance of a highly successful young scientist who managed to win not one but two prestigious individual fellowship grants worth hundreds of thousands of Euros each, and yet did not get a single offer from any of the non-fellowship positions they applied to! So tailor your applications to the situation.

8. Apply for fellowships

Applications for the more prestigious fellowships require more work — a lot more work — than other postdoc applications. They will require you to produce a proper research proposal, which will need to include a clear and inspiring outline — of anything between one and twenty pages in length — of what you intend to do with the fellowship. This will take a long time to think of and longer to write. They will probably require you to work on the application in collaboration with your host department, and they may have a million other specifically-sized hoops for you to jump through.

Nevertheless, you should make a serious attempt to apply for them, for the following reasons.

  • Any kind of successful academic career requires you to write lots of such proposals, so you might as well start practising now.
  • Writing a proposal forces you to prepare a serious plan of what research you want to do over the next few years, which will help clarify a lot of things in your mind, including how much you actually want to stay in the field (see point 2 again!). 
  • You will often write the proposal in collaboration with your potential host department. This makes it far more likely that they will think favourably of you should any other opportunities arise there later! For instance, I know of many cases where applicants for Marie Curie fellowships have ended up with positions in the department of their choice even though the fellowship application itself was ultimately unsuccessful.
  • Major fellowship programmes are more likely to have the resources and the procedures in place to thoroughly evaluate each proposal, reducing the unfortunate random element I'll talk about below. Many will provide individual feedback and assessments, which will help you if you reapply next year.
  • Counter-intuitively, success rates may be significantly higher for major fellowships than for standard postdoc jobs! Last year, the success rate for Hubble fellowships was 5%, for Einstein fellowships 6%, and for Marie Curie fellowships (over all fields of physics) almost 18%. All of these numbers compare quite well with those for standard postdoc positions! Granted, this is at least partly due to self-selection by applicants who don't think they can prepare a good enough proposal in the first place, but it is still something to bear in mind.
  • Obviously, a successful fellowship application counts for a lot more in advancing your career than a standard postdoc.

9. Recognize the randomness

Potential employers are faced with a very large number of applications for each postdoc vacancy; a ratio of 100:1 is not uncommon. Even with the best of intentions, it is just not possible to give each application equal careful consideration, so some basic pre-filtering is inevitable.

Unfortunately for you, each department will have its own criteria for pre-filtering, and you do not know what those criteria are. Some will filter on recommendation letters, some on number of publications, some on number of citations. (As a PhD student I was advised by a well-meaning faculty member at a leading UK university that although they found my research very interesting, I did not yet meet their cutoff of X citations for hiring postdocs.) Others may deduce your field of interest only from existing publications rather than your research statement — this is particularly hard on recent PhDs who may be trying to broaden their horizons beyond their advisor's influence.

Beyond this, it's doubtful that two people in different departments will have the same opinion of a given application anyway. They're only human, and their assessments will always be coloured by their own research interests, their plans for the future of the department, their different personal relationships with the writers of your recommendation letters, maybe even what they had for breakfast that morning.

You can't control any of this. Your job is to produce as good and complete an application as possible (remember to send everything they ask for!), to apply to lots of suitable places, and then to learn not to fret.

10. Don't tie your self-esteem to the outcome

You will get rejections. Many of them. Even worse, there will be many places who don't even bother to let you know you were rejected. You will sometimes get a rejection at the exact same time as someone else you know gets an offer, possibly for the same position. (Things are made worse if you read the postdoc rumour mills regularly.)

It's pretty hard to prevent these rejections from affecting you. It's all too easy to see them as a judgement of your scientific worth, or to develop a form of imposter syndrome. Don't do this! Read point 9 again. 

I'd also highly recommend reading this post by Renée Hlozek, which deals with many of the same issues. (Renée is one of the rising stars of cosmology, with a new faculty position after a very prestigious postdoc fellowship, but she too got multiple rejections the first time she applied. So it does happen to the best too, though people rarely tell you that.)

-------

That's it for my 10 tips on applying for (physics) postdocs. They were written primarily as advice I would have liked to have given my former self at the time I was completing my PhD.

There's plenty of other advice available elsewhere on the web, some of it good and some not so good. I personally felt that far too much of it concerned how to choose the best of multiple offers, which is both a bit pointless (if you've got so many offers you'll be fine either way) and really quite far removed from the experience of the vast majority of applicants. I hope some people find this a little more useful.


Sunday, September 13, 2015

10 tips for making postdoc applications (Part 1)

Around this time of year in the academic cycle, thousands of graduate students around the world will be starting to apply for a limited supply of short-term postdoctoral research positions, or 'postdocs'. They will not only be competing against each other, but also against slightly more senior colleagues applying for their second or possibly third or fourth postdocs.

The lucky minority who are successful — and, as Richie Benaud once said about cricket captaincy, it is a matter of 90% luck and 10% skill, but don't try it without the 10% — will probably need to move their entire life and family to a new city, country or continent. The entire application cycle can last two or three months — or much longer for those who are not successful in the first round — and is by far the most stressful part of an early academic career.



What I'd like to do here is to provide some unsolicited advice on how best to approach the application process, which I hope will be of help to people starting out on it. This advice mostly consists of a collection of things that I wish people had told me when I was starting out myself, plus things that people did tell me, but that for whatever reason I didn't understand or appreciate.

My own application experience has been in the overlapping fields of cosmology, astrophysics and high-energy particle physics, and most of my advice is written with these fields in mind. Some points are likely to be more generally useful, but I don't promise anything!

I'm also not going to claim to know much about what types of things hiring professors or committees actually look for — in fact, I strongly suspect that there are very few useful generalizations that can be made which cover all types of jobs and departments. So I won't tell you what to wear for an interview, or what font to use in your CV. Instead I'll try to focus on things that might help make the application process a bit less stressful for you, the applicant, giving you a better chance of coming out the other side still happy, sane, and excited about science.

With that preamble out of the way, here are the first 5 of my tips for applying for postdocs! The next 5 follow in part 2 of this post.

1. Start early

At least in the high-energy and astro fields, the way the postdoc job market works means that for the vast majority of jobs starting in September or October of a given year, the application deadlines fall around September to October of the previous year. Sometimes — particularly for positions at European universities — the deadlines may be a month or two later. However, for most available positions job offers are made around Christmas or early in the new year, and the number of positions still advertised after about February is small to start with and decreases fast with each additional month.

This means if you want to start a postdoc in 2016, you should already have started preparing your application materials. If not, it's not too late, but start immediately!

Applying for research jobs is a very different type of activity to doing research, is not as interesting, requires learning a different set of skills, and therefore can be quite daunting. This makes it all too easy to procrastinate and put it off! In my first application cycle, I came up with a whole lot of excuses and didn't get around to seriously applying for anything until at least December, which is way too late.  

2. Consider other options

This sounds a bit harsh, but I think it is vital. My point is not that getting into academia is a bad career move, necessarily. But don't get into it out of inertia. I've met a few people who, far too many months into the application cycle, with their funding due to run out, and despite scores of rejections, continue the desperate search for a postdoc position somewhere, anywhere, simply because they cannot imagine what else they might do.

Don't be that person. There are lots of cool things you can do even if you don't get a postdoc. There are many other interesting and fulfilling careers out there, which will provide greater security, won't require constant upheaval, and will almost certainly pay better. Many of them still require the kinds of skills we've spent so many years learning — problem solving, tricky mathematics, cool bits of coding, data analysis — but most projects outside academia will be shorter and less nebulous, success will be more quantifiable and the benefits of success may well be more tangible.

If you have no idea what kinds of jobs you could do outside academia, find out now. Get in touch with previous graduate students from your department who went that way, find out what they are doing and how they got there. The AstroBetter website provides a great collection of career profiles which may provide inspiration. 

If after examining the alternatives you decide you'd still prefer that postdoc, great. But at least when you apply you won't be doing it purely out of inertia, and you'll have the reassurance that if you don't get it, there are other cool things you could do instead. And I'm pretty sure this will help your peace of mind during the weeks or months you spend re-drafting those research statements!

Image credit: Jorge Cham.

3. Apply everywhere

It's not uncommon in physics for some postdoc ads to attract 100 qualified applicants or more per available position, and the number of advertised positions isn't that large. So apply for as many as you can! It's not a great idea to decide where to apply based on 'extraneous' reasons — e.g., you only want to live in California or Finland or some such. 

Particularly if you're starting within Europe, you will probably have to move to a new country, and probably another new country after that. So if you have a strong aversion to moving countries, I'd suggest going back to point 2 above. 

On the other hand, you can and should take a more positive view: living somewhere new, learning a new language and discovering a new culture and cuisine can be tremendous fun! Even remote places you've never heard of, or places you think you might not like, can provide you with some of the best memories of your life. Just as a small example, before I moved to Helsinki, my mental image of Finland was composed of endless dark, depressing winter. After two years here this image has been converted instead to one of summers of endless sunshine and beautiful days spent at the beach! (Disclaimer: of course Finland is also dark, cold and miserable sometimes. Especially November.)

So for every advertised position, unless you are absolutely 100% certain that you would rather quit academia than move there for a few years — don't think about it, just apply. For the others, think about it and then apply anyway. If you get offered the job you'll always be able to say no later.

4. Don't apply everywhere

However, life is short. Every day you spend drafting a statement telling people what great research you would do if they hired you is a day spent not doing research, or indeed anything else. If you apply for upwards of 50 different postdoc jobs (not an uncommon number!), all that time adds up.

So don't waste it. Read the job advertisement carefully, and assess your chances realistically. There's not much to be gained from applying to departments which are not a good academic fit for you.

When I first applied for postdocs several years ago, I would read an advert that said something like "members of the faculty in Department X have interests in, among other things, string theory, lattice QCD, high-temperature phase transitions, multiloop scattering amplitudes, collider phenomenology, BSM physics, and cosmology," and I'd focus on those two words "and cosmology". So despite knowing that "cosmology" is a very broad term that can mean different things to different people, and despite not being qualified to work on string theory, lattice QCD, high-temperature phase transitions etc., I'd send off my application talking about analysis of CMB data, galaxy redshift surveys and so on, optimistically reasoning that "they said they were interested in cosmology!" And then I'd never hear back from them.

Nowadays, my rule of thumb would be this: look through the list of faculty, and if it doesn't contain at least one or two people whose recent papers you have read carefully (not just skimmed the abstract!) because they intersected closely with your own work, don't bother applying. If you don't know them, they almost certainly won't know you. And if they don't know you or your work, your application probably won't even make it past the first round of sorting — faced with potentially hundreds of applicants, they won't even get around to reading your carefully crafted research statement or your glowing references.

Being selective in where you apply will save you a heap of time, allow you to produce better applications for the places which really do fit your profile, and most importantly leave you feeling a lot less jaded and disillusioned at the end of the process.

5. Choose your recommendations well


Almost all postdoc adverts ask for three letters of recommendation in addition to research plans and CVs. These letters will probably play a crucial part in the success of your application. Indeed for a lot of PhD students applying for their first postdoc, the decision to hire is based almost entirely on the recommendation letters - there's not much of an existing track record by this stage, after all.

So it's important to choose well when asking senior people to write these recommendations for you. As a graduate student, your thesis advisor has to be one of them. It helps if one of the others is from a different university to yours. If possible, all three should be people you have worked, or are working, closely with, e.g. coauthors. But if this is not possible, one of the three could also be a well-known person in the field who knows your work and can comment on its merit and significance in the literature.

Having said that, there are several other factors that go into choosing who to get recommendations from. Some professors are much better at supporting and promoting their students and postdocs in the job market than others. You'll notice these people at conferences and seminars: in their talks they will go out of their way to praise and give credit to the students who obtained the results they are presenting, whereas others might not bother. These people will likely write more helpful recommendations; they also generally provide excellent career advice, and may well help your application in other, less obvious, ways. They are the ideal mentors, and all other things being equal, their students typically fare much better at getting that first and all-important step on the postdoc ladder. Of course ideally your thesis advisor will be such a person, but if not, find someone in your department who is and ask them for help.

Somewhat unfortunately, I'm convinced that how well your referees themselves are known in the department to which you are applying is almost as important as how much they praise you. If neither you nor any of your referees have links — previous collaborations, research visits, invitations to give seminars — with members of the advertising department, I think the chances of your application receiving the fullest consideration are unfortunately much smaller. (I realise this is a cynical view and having never been on a hiring committee myself I have no more than anecdotal evidence in support of it. But I do see which postdocs get hired where.) So choose wisely.

It is also a good idea to talk frankly to your professors/advisor beforehand. Explain where you are planning to apply, what they are looking for, and what aspects of your research skills you would like their letters to emphasise. Get their advice, but also provide your own input. You don't want to end up with a research statement saying you're interested in working in field A, while your recommendations only talk about your contributions in field B.

---

That's it for part 1 of this lot of unsolicited advice. Part 2 is available here!

Sunday, April 26, 2015

Supervoid superhype, or the publicity problem in science

Part of the reason this blog has been quiet recently is that I decided at the start of this year to try to avoid — as far as possible — purely negative comments on incorrect, overhyped papers, and focus only on positive developments. (The other part of the reason is that I am working too hard on other things.)

Unfortunately, last week a cosmology story hit the headlines that is so blatantly incorrect and yet so unashamedly marketed to the press that I'm afraid I am going to have to change that stance. This is the story that a team of astronomers led by Istvan Szapudi of the University of Hawaii have found "the largest structure in the Universe", which is a "huge hole" or "supervoid" that "solves the cosmic mystery" of the CMB Cold Spot. This story was covered by all the major UK daily news outlets last week, from the Guardian to the Daily Mail to the BBC, and has been reproduced in various forms in all sorts of science blogs around the world. 

There are only three things in these headlines that I disagree with: that this thing is a "structure", that it is the largest in the Universe, and that it solves the Cold Spot mystery.

Let's focus on the last of these. Readers of this blog may remember that I wrote about the Cold Spot mystery in August last year, referring to a paper my collaborators and I had written which conclusively showed that this very same supervoid could not explain the mystery. Our paper was published back in November in Phys. Rev. D (journal link, arXiv link). And yet here we are six months later, with the same claims being repeated!

Does the paper by Szapudi et al. (journal link, arXiv link) refute the analysis in our paper? Does it even acknowledge the results in our paper? No, it pretends this analysis does not exist and makes the claims anyway.

Just to be clear, it's possible that Szapudi's team are unaware of our paper and the fact that it directly challenged their conclusions several months before their own paper was published, even though Phys. Rev. D is a very high profile journal. This is sad and would reflect a serious failure on their part and that of the referees. The only alternative explanation would be that they were aware of it but chose not to even acknowledge it, let alone attempt to address the argument within it. This would be so ethically inexcusable that I am sure it cannot be correct.

I am also frankly amazed at the standard of refereeing which I'm afraid reflects extremely poorly on the journal, MNRAS.

Coming to the details. In our paper last year, we made the following points:
  1. Unless our understanding of general relativity in general, and the $\Lambda$CDM cosmological model in particular, is completely wrong, this particular supervoid, which is large but only has at most 20% less matter than average, is completely incapable of explaining the temperature profile of the Cold Spot.
  2. Unless our understanding is completely wrong as above, the kind of supervoid that could begin to explain the Cold Spot is incredibly unlikely to exist — the chances are about 1:1,000,000!
  3. The corresponding chances that the Cold Spot is simply a random fluctuation that requires no special explanation are at worst 1:1000, and, depending on how you analyse the question, probably a lot better.
  4. This particular supervoid is big and rare, but not extremely so. In fact several voids that are as big or bigger, and as much as 4 times emptier, have already been seen elsewhere in the sky, and theory and simulation both suggest there could be as many as 20 of them.
To illustrate point 1 graphically, I made the following figure showing the actual averaged temperature profile of the Cold Spot versus the prediction from this supervoid:

Image made by me.

If this counts as a "solution to a cosmic mystery" then I'm Stephen Hawking.

The supervoid can only account for less than 10% of the total temperature decrement at the centre of the Cold Spot (angle of $0^\circ$). At other angles it does worse, failing to even predict the correct sign! And remember, this prediction only assumes that our current understanding of cosmology is not completely, drastically wrong in some way that has somehow escaped our attention until now.

You'll also notice that if the entire red line is somehow magically (through hypothetical "modified gravity effects") scaled down to match the blue line at the centre, it remains wildly, wildly wrong at every other angle. This is a direct consequence of the fact that the supervoid is very large, but really not very empty at all.

By contrast, the simple fact that the Cold Spot is chosen to be the coldest spot in the entire CMB already accounts for 100% of the cold temperature at the centre:

The red line is the observed Cold Spot temperature profile. 68% of the coldest spots chosen in random CMB maps have temperatures lying within the dark blue band, and 95% lie within the light blue band. Image credit: http://arxiv.org/abs/1408.4720.

Similarly, the fact that Mt. Everest is much higher than sea level is not at all surprising. The highest mountains on other planets (Mars, for instance) can be a lot higher still.

But how to explain the fact that a large void does appear to lie in the same direction as the Cold Spot? Is this not a huge coincidence that should be telling us something?

Let's try the following calculation. Take the hypothesis that this particular void is causing the Cold Spot, let's call it hypothesis H1. Denote the probability that this void exists by $p_\mathrm{void}$, and the probability that all of GR is wrong and that some unknown physics leads to a causal relationship as $p_\mathrm{noGR}$. Then
$$p_\mathrm{H1}=p_\mathrm{void}p_\mathrm{noGR}.$$
On the other hand, let H2 be the hypothesis that the void and the Cold Spot are separate rare occurrences that happen by chance to be aligned on the sky. This gives
$$p_\mathrm{H2}=p_\mathrm{void}p_\mathrm{CS}p_\mathrm{align},$$
where $p_\mathrm{CS}$ is the probability that the Cold Spot is a random fluctuation on the last scattering surface, and $p_\mathrm{align}$ the probability that the two are aligned.

The relative likelihood of the two rival hypotheses is given by the ratio of the probabilities:
$$\frac{p_\mathrm{H1}}{p_\mathrm{H2}}=\frac{p_\mathrm{noGR}}{p_\mathrm{CS}p_\mathrm{align}}.$$
Suppose we assume that $p_\mathrm{CS}=0.05$, and that the chance of alignment at random is $p_\mathrm{align}=0.001$.[1] Then the likelihood we should assign to the "supervoid-caused-the-Cold-Spot" hypothesis depends on whether we think $p_\mathrm{noGR}$ is more or less than 1 in 20,000.
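To make the arithmetic explicit, here is a minimal Python sketch of the comparison above, using the illustrative numbers quoted in the text ($p_\mathrm{CS}=0.05$, $p_\mathrm{align}=0.001$); the variable and function names are my own.

```python
# Minimal sketch of the likelihood ratio above, using the illustrative
# probabilities quoted in the text (not values taken from any paper).

p_CS = 0.05       # probability that the Cold Spot is just a random fluctuation
p_align = 0.001   # probability of a chance alignment of void and Cold Spot

def likelihood_ratio(p_noGR, p_CS=p_CS, p_align=p_align):
    """Relative likelihood p_H1/p_H2 of 'void caused the Cold Spot' (H1)
    versus 'chance alignment of two separate rare events' (H2)."""
    return p_noGR / (p_CS * p_align)

# H1 is favoured over H2 only if p_noGR exceeds this threshold:
threshold = p_CS * p_align
print(f"H1 favoured only if p_noGR > {threshold:g}")   # 5e-05, i.e. 1 in 20,000
```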

This exact calculation appears in Szapudi et al's paper, except that they mysteriously leave out the numerator on the right hand side. This means that they assume, with probability 1, that general relativity is wrong and that some unknown cause exists which makes a void with only a 20% deficit of matter create a massive temperature effect. In other words, they've effectively assumed their conclusion in advance.

Well, call me old-fashioned, but I don't think that makes any sense. We have a vast abundance of evidence, gathered over the last 100 years, which shows that if indeed GR is not the correct theory of gravity it is still pretty damn close to it. What's more, we have lots of cosmological evidence — from the Planck CMB data, from cross-correlation measurements of the ISW effect, as well as from weak lensing — that gravity behaves very much as we think it does on cosmological scales. Looking at the figure above, for the supervoid to explain the Cold Spot requires at least a factor of 10 increase in the ISW effect at the void centre, as well as a dramatic effect on the shape of the temperature profile. And all this for a void with only a 20% deficit of matter! If the ISW effect truly behaved like this we would have seen evidence of it in other data.

For my money, I would put $p_\mathrm{noGR}$ at no higher than $2.9\times10^{-7}$, i.e. I would rule out the possibility at $5\sigma$ confidence. This is a lot less than 1:20,000, so I would say chance alignment is strongly favoured. Of course you should feel free to put your own weight on the validity of all of the foundations of modern cosmology, but I suggest you would be very foolish indeed to think, as Szapudi et al. seem to do, that it is absolutely certain that these foundations are wrong.
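As a quick check of where that number comes from, here is a sketch assuming the conventional one-sided Gaussian tail probability for "$5\sigma$":

```python
# Where the 2.9e-7 figure comes from: the one-sided Gaussian tail probability
# beyond 5 sigma, and the resulting likelihood ratio for the numbers above.
from scipy.stats import norm

p_noGR = norm.sf(5)                # one-sided 5-sigma tail, ~2.87e-7
ratio = p_noGR / (0.05 * 0.001)    # p_H1 / p_H2
print(f"p_noGR = {p_noGR:.2e}, p_H1/p_H2 = {ratio:.3f}")
# p_noGR = 2.87e-07, p_H1/p_H2 = 0.006 -- chance alignment strongly favoured
```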

So much for the science, such as it is. The sociological observation that this episode brings me back to is that, almost without exception, whenever a paper on astronomy or cosmology is accompanied by a big press release, either the science is flawed, or the claims in the press release bear no relation to the contents of the paper. This is a particularly blatant example, where the authors have generated a big splash by ignoring (or being unaware of) existing scientific literature that runs contrary to their argument. But the phenomenon is much more ubiquitous than this.

I find this deeply depressing. Like most other young researchers (I hope), I entered science with the naive impression that what counted in this business was the accuracy and quality of research, the presentation of evidence, and — in short — facts. I thought the scientific method would ensure that papers would be rigorously peer-reviewed. I did not expect that how seriously different results are taken would instead depend on the seniority of the lead author and the slickness of their PR machine. Do we now need to hold press conferences every time we publish a paper just to get our colleagues to cite our work?

One possible response to this is that I was hopelessly naive, so more fool me. Another, which I still hope is closer to the truth, is that, in the long run, the crap gets weeded out and that truth eventually prevails. But in an era when public "impact" of scientific research is an important criterion for career advancement, and such impact can be simply achieved by getting the media to hype up nonsense papers [2], I am sadly rather more skeptical of the integrity of scientists [3].
----

[1] This probability for alignment is the number quoted by Szapudi's team, based on the assumption that there is only one such supervoid, which could be anywhere in the sky. In fact, as I've already said, theory and simulation suggest there should be as many as 20 supervoids, and several have already been seen elsewhere in the sky (including one other by Szapudi's team themselves!). The probability that any one supervoid should be aligned with the Cold Spot should therefore be roughly 20 times larger, or 0.02.

[2] Not everything in Szapudi's paper is nonsense, of course. For instance, it seems quite likely that there is indeed a large underdensity where they say. But there is still a good deal of nonsense (described above) in the actual paper, and vastly more in the press releases, especially the one from the Institute for Astronomy in Hawaii. 

[3] On the whole, given the circumstances I thought journalists handled the hype quite well, especially Hannah Devlin in the Guardian, who included a skeptical take from Carlos Frenk. (I suspect Carlos at least was aware of our paper!)

Tuesday, December 23, 2014

Planck's starry sky

Well December 22nd has come and gone, and the promised release of Planck data has, perhaps unsurprisingly, not materialised. Some of the talks presented at the Ferrara conference are available here, and there was a second conference in Paris more recently, video recordings from which can be found here.

I've seen a bit of speculation about the delays and what the data might or might not be showing on a few physics blogs, some of which I think are a little mistaken. So I thought I'd put up a quick post summarising the situation as I see it — but note that my opinion is not at all official and may be wrong on some of the details (especially since I wasn't at either of the conferences).

For a start, you may notice that some of the talks at Ferrara are not made available on the website. I'm informed that this internal censorship was applied by the Planck science team, and it is based on their estimation that the censored talks are ones containing results which are still preliminary and liable to change before the eventual data release. The flip side of this is that the talks that are available contain data which they are confident will not change, so these are the ones you'd want to pay attention to in any case.

In terms of the data itself, there appear to be two and a half important improvements so far. The first is that the overall calibration of the temperature power spectrum — which was previously somewhat discrepant with the WMAP measurement — has been improved, and now Planck and WMAP agree very well with each other. The second is that the apparent anomaly in the temperature power spectrum at multipole values of $\ell\sim1800$ has been identified as being due to a glitch in the 217 GHz data, and has been corrected. The anomaly has therefore disappeared. This can be seen by comparing the 2013 and 2014 versions of the TT power spectrum (if you look carefully):



The remaining half-improvement comes from the polarization data. Previously, this was so badly affected by systematics at large scales that the Planck team were only able to even show the data points at $\ell>100$, and were unable to use them for any science analysis, relying instead on the WMAP polarization. These systematics have still not been completely resolved — apparently it is the HFI instrument which is the problematic one — but they have been somewhat improved, such that the EE and TE power spectra are trustworthy at $\ell>30$, which is enough to start using them for parameter constraints in place of the WMAP data. (This means that the error bars on various derived parameter values have decreased a little from 2013, but they will decrease a lot more when all the data is finally available.)

This last half improvement is somewhat relevant to the BICEP2 issue which I discussed here, since the improved polarization data in 2014 was an important reason that Planck was able to say something about the dust polarization in the BICEP2 window. The fact that they still aren't 100% happy with this data yet could be a bit concerning. On the other hand, the relevant range of multipoles for BICEP2 is $\ell\sim80$ rather than $\ell<30$.

In terms of what these new data tell us, I'm afraid the story appears mostly rather boring, since there is very little change from what we learned already in 2013. As expected, the values of all cosmological parameters are consistent with what Planck announced in 2013; insofar as there have been any minor changes in the values, they tend to move in the direction of making Planck and WMAP more consistent with each other, but really these shifts appear very small and not worth worrying about. It appears the only parameter which has shifted at all significantly is the optical depth $\tau$. Constraints on $\tau$ rely on the use of polarization data; the previous constraints were obtained by combining Planck temperature with WMAP polarization measurements whereas the current value comes from Planck alone.

At some point I suppose the various systematics with the HFI polarization data will be sorted out to the extent that we will get the long-awaited release and the papers. But I have given up trying to predict when. In the meantime, I thought the coolest thing to come out of the recent conferences was this image:


which rather reminded me of this:

Detail from The Starry Night.
and this:

Detail from Haystacks Near a Farm in Provence. 

Monday, December 1, 2014

Planck at Ferrara

There is a conference starting today in Ferrara on the final results from Planck.

Though actually these won't be the final results from Planck, since although all scientists in the Planck team have been scrambling like mad to prepare for this date, they haven't been able to get all their results ready for presentation yet. So the actual release of most of the data and the scientific papers is scheduled for later this month. December 22nd, in fact — for European scientists, almost the last working day of the year (Americans tend to have some conferences between Christmas and New Year) — so at least we will technically have the results in 2014.

Except even that isn't really it, because the actual Planck likelihood code will only be released in January 2015. Or at least, I'm pretty sure that's what the Planck website used to say: now it doesn't mention the likelihood code by name, referring instead to "a few of the derived products."

If you're confused, well, so am I. The likelihood code is one of the most important Planck products for anyone planning to actually use Planck data for their own research — to do so properly normally means re-running fits to the data for your favourite model, which means you need the likelihood code. (Of course, some people do take the short cut of simply quoting Planck constraints on parameters derived in other contexts, and this is not always wrong.) This means that having the final, correct version of the likelihood code is rather important even for Planck scientists themselves to be completely confident in the results they are presenting. So it would make more sense to me if the likelihood code were released at the same time as the rest of the data. Perhaps that is what is actually going to happen, I suppose we'll find out soon.

Incidentally, my information is that the "final, correct" version of the likelihood code was distributed for internal use within the Planck collaboration about 4 weeks ago or so. Considering that it is only after this happens that proper model comparison projects can begin, that obtaining parameter constraints for each model can take a surprisingly large amount of computing time, that the various Planck teams responsible for this step had scores of different models to investigate, that the "final, correct" version may well have undergone a subsequent revision, and that the process of drafting each paper at the end of the analysis must itself take a couple of weeks minimum ... I suppose I'm not very surprised that the date for data release has been pushed back.

There's some uncertainty about whether the videos from the conference will be made available, as a statement on the website saying this would happen has been removed. For those interested here is a Youtube channel purporting to provide video from the conference, but disappointingly it doesn't appear to actually work. 

Wednesday, November 26, 2014

Quasar structures: a postscript

A few days ago I discussed the purported 'spooky' alignment of quasar spins and the cosmological principle here. So as to focus better on the main point, I left a few technical comments out of that discussion which I want to mention here. These don't have any direct bearing on the main argument made in that post — rather they are interesting asides for a more expert audience.

Quasars can't prove the Universe is homogeneous

Readers of the original post might have noticed that I was quite careful to always state that the distribution of quasars was statistically homogeneous, but not that the quasars showed the Universe was homogeneous. The reason for this lies in the properties of the quasar sample itself.

There are two main ways of constructing a sample of galaxies or quasars to use for further analysis, such as testing homogeneity. The first is that you simply include every object seen by the survey instruments within a certain patch of sky that lies between two redshifts of interest. But these objects will vary in their intrinsic brightness, and the survey instruments have a limited sensitivity, so they can only record dim objects when those objects are relatively close to us. Intrinsically bright objects are rarer, but at large distances they are the only ones we can see. So this strategy results in a sample with very many, but largely dim, galaxies or quasars relatively close to us, and fewer but brighter objects far away. This is known as a flux-limited sample.

The other strategy is to correct the measured brightness of each object for the distance from us, to determine its 'intrinsic' brightness (otherwise known as its absolute magnitude), and then select a sample of only those objects which have similar absolute magnitudes. The magnitude range is chosen in accordance with the range of distances such that within the volume of the Universe surveyed, we can be confident we have seen every object of that magnitude that exists. This is called a volume-limited sample.
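As a concrete illustration of the difference between the two strategies, here is a rough Python sketch of how one might carve a volume-limited sample out of a flux-limited catalogue. The magnitude limit, redshift range and fiducial cosmology below are hypothetical placeholders rather than the values of any particular survey, and K-corrections are ignored.

```python
# Rough sketch of a volume-limited selection from a flux-limited catalogue.
# The magnitude limit, redshift range and cosmology below are hypothetical.
import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)   # note: a (homogeneous) cosmology is already assumed here
m_lim = 19.1                            # hypothetical apparent magnitude limit of the survey
z_min, z_max = 1.0, 1.8                 # redshift range of interest

def absolute_magnitude(m_app, z):
    """Apparent to absolute magnitude via the distance modulus (no K-correction)."""
    d_L = cosmo.luminosity_distance(z).to_value("Mpc")
    return m_app - 5.0 * np.log10(d_L) - 25.0

# Faintest absolute magnitude still detectable at the far edge of the volume:
M_faint = absolute_magnitude(m_lim, z_max)

def volume_limited_mask(m_app, z):
    """Keep only objects bright enough to be seen anywhere in [z_min, z_max]."""
    m_app, z = np.asarray(m_app), np.asarray(z)
    M = absolute_magnitude(m_app, z)
    return (z >= z_min) & (z <= z_max) & (M <= M_faint)
```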

Testing the homogeneity of the Universe requires a volume-limited survey of objects. For a flux-limited sample the distribution in redshift (i.e., in the line-of-sight direction) would not be expected to be uniform in the first place: the number density of objects would ordinarily decrease sharply with redshift. But looking out away from Earth also involves looking back in time; so if the redshift range of the survey is large, the farthest objects are seen as they were at an earlier time than the closest ones. If the objects in question had evolved significantly in that time, near and far objects could represent significantly different populations even in a volume-limited sample, and once again we wouldn't expect to see homogeneity along the line of sight, even if the Universe were homogeneous.

So to really test the cosmological principle without having to assume homogeneity at the outset [1], we really need a volume-limited sample of galaxies that cover a very large volume of the Universe but span a relatively narrow range of redshifts. Such surveys are hard to come by. For example, the study confirming homogeneity in WiggleZ galaxies (see here and here) actually used a flux-limited sample, so required additional assumptions. In this case one doesn't obtain a proof, rather a check of the self-consistency of those assumptions — which people may regard as good enough, depending on taste.

Anyway, the key point is that the DR7QSO quasar sample everyone uses is most definitely flux-limited and not volume-limited (I was myself reminded of this point by Francesco Sylos Labini). Despite this, the redshift distribution of quasars is remarkably uniform (between redshifts 1 and 1.8). So what's going on? Well, unlike certain types of galaxies that live much closer to home, distant quasar populations are expected to evolve rather quickly with time. And the age difference between objects at redshifts 1 and 1.8 is more than 2 billion years!

It would appear that this effect and the flux-limited nature of the survey coincidentally roughly cancel each other out for the sample in question. A volume-limited subset of these quasars would be (is) highly inhomogeneous — but then because of the time evolution the homogeneity or otherwise of any sample of quasars says nothing much about the homogeneity or otherwise of the Universe in general.

Luckily this is only incidental to the main argument. The fact that the distribution of these (flux-limited) quasars is statistically homogeneous on scales of 100-odd Megaparsecs despite claims for the existence of Gigaparsec-scale 'structures' simply demonstrates the point that the existence of single structures of any kind doesn't have any bearing on the question of overall homogeneity. Which is the main point.

Homogeneity is sample-dependent 

Of course the argument above cuts both ways.

Let's imagine that a study has shown that the distribution of a particular type of galaxy — call them luminous red galaxies — approaches homogeneity above a certain distance scale, say 100 Megaparsecs. Such a study was done by David Hogg and others in 2005. From this we may reasonably conclude (though not, strictly speaking, prove) that the matter distribution in the Universe is homogeneous above at most 100 Mpc. But we are not allowed to conclude that the distribution of some other sample of objects — radio galaxies, quasars, blue galaxies etc. — approaches homogeneity above the same scale, or indeed at all!

Even in a Universe with a homogeneous matter distribution, the scale above which a volume-limited sample of galaxies whose properties are constant with time approaches homogeneity depends on the galaxy bias. This number depends on the type of galaxies in question, and so too to a lesser extent will the expected homogeneity scale. Of course if the sample is not volume-limited, or does evolve with time, all bets are off anyway.

More generally, for each sample of galaxies that we wish to use for higher order statistical measurements, the statistical homogeneity of that particular sample must in general be demonstrated first. This is because higher order statistical quantities, such as the correlation function, are conventionally normalized in units of the sample mean, but in the absence of statistical homogeneity this becomes meaningless.
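To make the normalization point concrete: in the commonly used Landy-Szalay estimator, for example, the (suitably normalized) data pair counts $DD(r)$ are compared against pair counts $DR(r)$ and $RR(r)$ involving a random catalogue laid down at the sample's mean density,
$$\hat{\xi}(r)=\frac{DD(r)-2DR(r)+RR(r)}{RR(r)},$$
so if the sample has no well-defined mean density the random catalogue, and with it $\hat{\xi}(r)$ itself, loses its meaning.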

There was a time when the homogeneity of the Universe was less well accepted than it is today, and the possibility of a fractal distribution of matter was still an open question. At that time demonstrating the approach to homogeneity on large scales in a well-chosen sample of galaxies was worth a publication (even a well-cited publication) in itself. This is probably no longer the case, but it remains a necessary sanity check to perform for each galaxy survey.

[1] Properly speaking, even the creation of a volume-limited sample requires an assumption of homogeneity at the outset, since the determination of absolute magnitudes requires a cosmological model, and the cosmological model used will assume homogeneity. In this sense all "tests" of homogeneity are really consistency checks of our assumption thereof.

Sunday, November 23, 2014

A 'spooky alignment' of quasars, or just hype?

In the news this week we've had a story on the alignment of quasar spins with large-scale structure, based on this paper by Hutsemekers et al. The paper was accompanied by this press release from the European Space Observatory, which was then reproduced in various forms in a number of blogs and news outlets — almost all of which stress the 'spooky' or 'mysterious' nature of the claimed alignment 'over billions of light years'.

At least one of these blogs (the one at The Daily Galaxy) explicitly claims that the alignment of these quasar spins is a challenge for the cosmological principle, which is the assumption of large-scale statistical homogeneity and isotropy of the Universe, on which all of modern cosmology is based. This claim is not contained in the press release, but originates from a statement in the paper itself, where the authors say
The existence of correlations in quasar axes over such extreme scales would constitute a serious anomaly for the cosmological principle.
I'm afraid that this claim is completely unsupported by any of the actual results contained within the paper, and is therefore one of those annoying examples of scientific hype. In this post I will try to explain why.

I have actually covered much of this ground before — in a blog post here, but more importantly in a paper published in Monthly Notices last year — and I must admit I am a little surprised at having to repeat these points (especially since my paper is cited by Hutsemekers et al.). Nevertheless, in what follows I shall try not to sound too grumpy.

The immediate story started with a paper by Roger Clowes and collaborators, who claimed to have detected the 'largest structure' in the Universe (dubbed the 'Huge-LQG') in the distribution of quasars, and also claimed that this structure violated the cosmological principle. My paper last year was a response to this, and made the following points:

  1. the detection of a single large structure has essentially no relevance to the question of whether the Universe is statistically homogeneous and isotropic;
  2. the quasar sample within which the Huge-LQG was identified is statistically homogeneous, and approaches homogeneity at the scale we expect theoretically, thus providing an explicit demonstration of point 1;
  3. the definition of 'structure' by which the Huge-LQG counts as a structure is so loose that by using it we would find equally vast 'structures' even in completely random distributions of points which (by construction!) contain no correlations and therefore no structure whatsoever; and 
  4. therefore the classification of the Huge-LQG set of quasars as a 'structure' is essentially empty of meaning.

Quasar structures don't violate homogeneity

Since I am already repeating myself, let me elaborate a little more on points 1 and 2. Our Universe is not exactly homogeneous. The fact that you exist — more generally, the fact that stars, galaxies and clusters of galaxies exist — is sufficient proof of this, so it would be a very poor advertisement for cosmology indeed if it were all founded on the assumption of exact homogeneity. Luckily it isn't. In fact our theories could be said to predict the existence of structure in the potential $\Phi$ on all scales (that's what a scale-invariant power spectrum from inflation means!), and even the galaxy-galaxy correlation function only goes asymptotically to zero at large scales.

Instead we have the assumption of statistical homogeneity and isotropy, which means that we assume that when looked at on large enough scales, different regions of the Universe are on average the same. Clearly, since this is a statement about averages, it can only be tested statistically by looking at large numbers of different regions, not by finding one particular example of a 'structure'. In fact there is a well-established procedure for checking the statistical homogeneity of the distribution of a set of points (the positions of galaxies or quasars, in this case), which involves measuring its fractal dimension and checking the scale above which this is equal to 3. I've described the procedure before, here and here, and Peter Coles describes a bit of the history of it here.
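For the curious, here is a rough Python sketch of a scaled counts-in-spheres measurement of this kind. It illustrates the general idea only, under the assumption that the data and a random catalogue with the same survey geometry are given as comoving Cartesian coordinates; it is not the exact estimator used in my paper.

```python
# Rough sketch of scaled counts-in-spheres: the mean number of neighbours
# within r of each data point, divided by the value expected for a
# statistically homogeneous distribution with the same survey geometry.
import numpy as np
from scipy.spatial import cKDTree

def scaled_counts_in_spheres(data_xyz, random_xyz, radii):
    """data_xyz, random_xyz: (N, 3) arrays of comoving Cartesian coordinates.
    Returns values that approach 1 on scales where the sample is homogeneous."""
    n_d, n_r = len(data_xyz), len(random_xyz)
    data_tree = cKDTree(data_xyz)
    rand_tree = cKDTree(random_xyz)
    results = []
    for r in radii:
        # data-data pairs within r (ordered pairs, with self-matches removed)
        dd = data_tree.count_neighbors(data_tree, r) - n_d
        # data-random pairs within r, rescaled to the data number density
        dr = data_tree.count_neighbors(rand_tree, r) * (n_d / n_r)
        results.append(dd / dr)
    return np.array(results)

# The correlation dimension D2(r) is the logarithmic slope d ln N(<r) / d ln r,
# which should approach 3 (and the ratio above should approach 1) above the
# homogeneity scale.
```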

The bottom line is that, as I showed last year, the quasar distribution in question is statistically homogeneous above scales of at most $\sim130h^{-1}$Mpc. There is therefore no 'structure' you can find in this data which could violate the cosmological principle. End of story.

Scaled number counts in spheres as a measure of the fractal dimension of the quasar distribution. On scales where this number approaches 1, the distribution is statistically homogeneous. From arXiv:1306.1700.

Structures and probability

Of course, there are many different ways of being statistically homogeneous. It is perfectly possible that within a statistically homogeneous distribution one could find a particular structure or feature whose existence in our specific cosmological model (which is one of many possible models satisfying the cosmological principle) is either very unlikely or impossible. This would then be a problem for that cosmological model despite not having any wider implications for the cosmological principle. But to prove this requires some serious analysis, which should include a proper treatment of probabilities — you can't just say "this structure is big, so it must be anomalous."

In particular, any serious analysis of probabilities must take into account how a 'structure' is defined. Given infinitely many possible choices of definition, and a very large Universe in which to search, the probability of finding some 'structure' that extends over billions of light years is practically unity. In fact the definition used for the Huge-LQG would be likely to throw up equally vast 'structures' even if quasar positions were not at all correlated with each other (and we know they must be at least somewhat correlated, because of gravity). So it really isn't a very useful definition at all.
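To convince yourself of this, it is enough to run a friends-of-friends grouping, of the kind used to define the Huge-LQG, on completely uncorrelated points. The sketch below is illustrative only: the point density and linking length are invented, not the values used by Clowes et al.

import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

# Uncorrelated (Poisson) points grouped with a friends-of-friends linking length.
rng = np.random.default_rng(1)
box, npts = 3000.0, 20000                      # Mpc/h and number of points (made up)
pts = rng.uniform(0, box, size=(npts, 3))
link = 0.8 * box / npts**(1/3)                 # linking length ~ mean separation (made up)

tree = cKDTree(pts)
pairs = tree.query_pairs(link, output_type='ndarray')
adj = coo_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])),
                 shape=(npts, npts))
ngroups, labels = connected_components(adj, directed=False)

# Report the largest friends-of-friends group and its extent, despite the fact
# that by construction these points contain no correlations at all.
counts = np.bincount(labels)
big = np.argmax(counts)
members = pts[labels == big]
extent = np.linalg.norm(members.max(axis=0) - members.min(axis=0))
print(f"largest 'structure': {counts[big]} members, extent ~ {extent:.0f} Mpc/h")

Push the linking length towards the percolation threshold and the largest 'structure' in pure noise grows to span a sizeable fraction of the whole box.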

'Spooky' alignments

This brings us to the current paper by Hutsemekers et al. The starting assumption of this paper is that the Huge-LQG is a real structure which is somehow distinguished from its surroundings. This assumption is manifest in the decision that the authors make to try to measure the polarization of light from only those quasars that are classified as part of the Huge-LQG rather than a more general sample of quasars. This classic case of circular reasoning is the first flaw in the logic, but let's put it to one side for a minute.

The press release then tells us that the scientists
"found that the rotation axes of the central supermassive black holes in a sample of quasars are parallel to each other over distances of billions of light years"
and that
"the spins of the central black holes are aligned along the filaments of large-scale structure in which they reside."

I find this statement extremely problematic. Here is a figure from the paper in question, showing the sky positions of the 93 quasars in question, along with the polarization orientations for the 19 which are used in the actual analysis:

Quasar positions (black dots) and polarization alignments (red lines). From arXiv:1409.6098.

Do you see the alignment? No, me neither. In fact, looking at the distribution of angles in panel b, I would say that looks very much like a sample drawn from a perfectly uniform distribution.
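That impression can be checked quantitatively. Polarization angles are axial data (an angle $\theta$ is the same as $\theta+180^\circ$), so the standard trick is to double the angles and look at the mean resultant length, comparing against uniform simulations rather than trusting small-sample formulae. A sketch follows; the angles in it are random placeholders, to be replaced by the 19 published measurements.

import numpy as np

# Rayleigh-style uniformity test for axial data, calibrated by Monte Carlo.
rng = np.random.default_rng(0)
angles_deg = rng.uniform(0, 180, 19)           # placeholder, NOT the measured values

def resultant_length(theta_deg):
    phi = np.deg2rad(2.0 * theta_deg)          # double the angles (axial data)
    return np.abs(np.exp(1j * phi).mean())

r_data = resultant_length(angles_deg)
r_null = np.array([resultant_length(rng.uniform(0, 180, len(angles_deg)))
                   for _ in range(100000)])
p = (r_null >= r_data).mean()
print(f"R = {r_data:.3f}, Monte Carlo p-value for uniformity = {p:.3f}")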

So what is the claim actually based on? Well, for a start one has to split up the (arbitrarily defined) 'structure' into several (even more arbitrarily defined) 'sub-structures'. Each of these sub-structures then defines a different reference angle on the sky:

Chopping the data to suit the argument (Figure 4 of arXiv:1409.6098). On what basis are sub-structures 1 and 2 defined as separate from each other?

And now one has to measure the angle between the quasar polarization direction and the reference direction of the particular sub-structure, and the angle to the direction perpendicular to that reference, and choose the smaller of the two. In other words, rather than prove that quasars are aligned parallel to each other over distances extending over billions of light years (the claim in the press release), what Hutsemekers et al. are actually doing is attempting to show that given arbitrary choices of some smaller sub-structures and reference directions, quasars in different sub-structures are typically aligned either parallel to or perpendicular to this direction. This is a much less exacting standard.
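In code the folding step looks something like the function below. This is my reading of the general procedure, not the authors' pipeline, and the example numbers are invented.

import numpy as np

def acute_alignment_angle(pol_deg, ref_deg):
    """Angle between a polarization axis and a reference axis, folded so that
    'parallel' and 'perpendicular' are both counted as aligned."""
    delta = np.abs(pol_deg - ref_deg) % 180.0
    delta = np.minimum(delta, 180.0 - delta)   # axial difference, in [0, 90] degrees
    # Compare against the perpendicular as well and keep the smaller angle.
    # By construction every object ends up within 45 degrees of SOME preferred
    # direction, which is why this is a much weaker statement than 'parallel'.
    return np.minimum(delta, 90.0 - delta)

print(acute_alignment_angle(100.0, 20.0))      # -> 10.0, i.e. 10 deg from perpendicular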

Even this claim is not particularly well supported by the evidence. That is, looking at the distribution of angles, I am really not at all convinced that this shows evidence for a bimodal distribution with peaks at 0 and 90 degrees:

Distribution of angles purportedly showing two distinct peaks at 0 and 90. Figure 5 of arXiv:1409.6098.

So in summary I think the statistical evidence of alignment of quasar spins is already pretty weak. I don't see any analysis in the paper dealing with the effects of a different arbitrary choice of sub-structures, nor do I see any error analysis (the error in measuring the polarization direction of a quasar can be as large as 10 degrees!). And I haven't even dealt with the fact that the polarization data is used for only 19 quasars out of the full 93 — in other words, for the majority of quasars in the sample the central black hole spins are aligned along some other, undetermined, direction such that we can't measure the polarization.

Extraordinary claims require extraordinary evidence

Now, it's worth repeating that we've already seen that in fact the space distribution of quasars is statistically homogeneous in accordance with the cosmological principle. That simple test has been done, and the cosmological principle survives. So if you've got some more nuanced claim of an anomaly, I think the onus is on you not only to describe the measurement you made, but also to say what exactly is anomalous about it. What is the theoretical prediction we should compare it to? Which model is being rejected (or otherwise) by the new data?

So, for instance, if quasar spins in sub-structures are indeed aligned either parallel or perpendicular to each other (and I have yet to be convinced that they are), is this really something 'spooky', or would we expect some degree of alignment in the standard $\Lambda$CDM model?

Such an analysis has not been presented, but even if it had, it's worth bearing in mind the principle that extraordinary claims require extraordinary evidence. I'm afraid throwing out a p-value of about 1% simply doesn't cut it. Not only is that actually not an enormously impressive number (especially given all the other things I mentioned above), such a frequentist statistic doesn't take account of all our prior knowledge.

Other people have banged this drum at length before, but the point is easily summarized: the p-value tells us the probability of getting this data given the model, but doesn't tell us the probability of the model being correct despite the new data appearing to contradict it. The latter is the question we really wish to answer. To do this requires a Bayesian analysis, in which one must account for the prior belief in the model, which is the result of confidence built up from all other experimental results that agree with it. We have an incredible amount of observational evidence in favour of our current model, evidence that would probably not be consistent with an alternative model in which gigantic structures could exist (I say 'probably' because no such model actually exists at present).
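To put a rough number on this, there is a well-known bound (due to Sellke, Bayarri and Berger) on how much evidence a p-value can possibly carry: the Bayes factor against the null can be no larger than $1/(-e\,p\ln p)$. Feeding in a p-value of 1% and some illustrative prior odds (the 1000:1 below is my invention, purely to make the point):

import numpy as np

def max_bayes_factor(p):
    """Sellke-Bayarri-Berger bound on the Bayes factor against the null
    hypothesis that a p-value can support (valid for p < 1/e)."""
    return 1.0 / (-np.e * p * np.log(p))

p_value = 0.01
prior_odds_for_lcdm = 1000.0      # illustrative prior odds, not a measured quantity
bf = max_bayes_factor(p_value)    # ~ 8 for p = 0.01
posterior_odds = prior_odds_for_lcdm / bf
print(f"Bayes factor bound: {bf:.1f}")
print(f"Posterior odds still favouring the standard model: {posterior_odds:.0f} : 1")

Even on the most generous possible reading of that p-value, odds of better than a hundred to one still favour the standard picture.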

So my prior in favour of $\Lambda$CDM is pretty high — 19 quasars and an analysis so full of holes are not going to change that so quickly.

Monday, September 22, 2014

Biting the dust

Sorry about the obvious pun in the title. Today's important announcement is of course the long-awaited Planck verdict on the level at which the BICEP2 "discovery" of primordial gravitational waves had been contaminated by foreground dust. That verdict does not look good for BICEP.

(Incidentally, back in July I reported a Planck source as saying this paper would be ready in "two or three weeks". Clearly that was far too optimistic. But interestingly many members of the Planck team themselves were confidently expecting today's paper to appear about 10 days ago, and the rumour is that the current version has been "toned down" a little, perhaps accounting for some of the additional delay. Despite that it's still pretty devastating.)

Let me attempt to summarize the new results. Some important points are made right in the abstract, where we read:
"... even in the faintest dust-emitting regions there are no "clean" windows in the sky where primordial CMB B-mode polarization measurements could be made without subtraction of foreground emission"
and that
"This level [of the dust power in the BICEP2 window, over the multipole range of the primordial recombination bump] is the same magnitude as reported by BICEP2 ..."
(my emphasis). Although
"the present uncertainties are large and will be reduced through an ongoing, joint analysis of the Planck and BICEP2 data sets,"
from where I am standing there unfortunately no longer seems to be a realistic chance that what BICEP2 reported was anything more than a very precise measurement of dust.

The Planck paper is pretty thorough, and actually quite interesting in its own right. They make use of the fact that Planck observes the sky at many frequencies to study the properties of dust-induced polarization. Whereas BICEP2 was limited to a single frequency channel at 150 GHz, the Planck HFI instrument has 4 different frequencies, of which the most useful is at 353 GHz. Previous Planck results have already shown that dust emission behaves sort of like a (modified) blackbody spectrum at a temperature of 19.6 Kelvin. Since this is a significantly higher temperature than the CMB temperature of 2.73 K, dust emission dominates at higher frequencies, which means that the 353 GHz channel essentially sees only dust and nothing else. This makes it perfect for the task at hand, since in this particular situation the roles are reversed: the dust is the signal and the primordial CMB is the noise!

The analysis proceeds in a number of steps. First, they study the power spectra of the two polarization modes (EE and BB) in several different large regions in the sky:

The different large sky regions studied are shown as increments of red, orange, yellow, green and two different shades of blue. The darkest blue region is always excluded. Figure from arXiv:1409.5738.
In all these different regions, both power spectra $C_\ell^{EE}$ and $C_\ell^{BB}$ are proportional to $\ell^{\alpha}$, consistent with a value of $\alpha=-2.42\pm0.02$. Fixing $\alpha$ to this value, the amplitude of the power spectra in the different large regions then shows a characteristic dependence on the mean intensity of the dust emission — i.e. regions with more dust overall also show more polarization power — and this purely empirical relationship is characterized by
$$A^{EE,BB}\propto\langle I_{353}\rangle^{1.9},$$ though with a bit of uncertainty in the fit. The amplitudes of the polarization power spectra then also show a dependence on frequency from 353 GHz down to 100 GHz which matches previous Planck results (the dependence is something close to a blackbody spectrum at 19.6 K, but with a specific modification).
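For what it's worth, the size of that frequency extrapolation is easy to estimate with a modified blackbody at 19.6 K. The emissivity index $\beta=1.5$ below is my assumption (Planck fits something in that neighbourhood), and this sketch only aims at the order of magnitude, not at reproducing the Planck pipeline.

import numpy as np

h, k, Tcmb, Tdust = 6.626e-34, 1.381e-23, 2.725, 19.6
beta = 1.5                         # assumed dust emissivity index (illustrative)

def planck_bnu(nu, T):
    x = h * nu / (k * T)
    return nu**3 / np.expm1(x)     # constants dropped; only ratios matter here

def dust_intensity(nu):
    return nu**beta * planck_bnu(nu, Tdust)      # modified blackbody

def thermo_conversion(nu):
    """dB/dT at T_CMB, up to constants: converts intensity to CMB temperature units."""
    x = h * nu / (k * Tcmb)
    return nu**4 * np.exp(x) / np.expm1(x)**2

nu1, nu2 = 150e9, 353e9
amp_ratio = (dust_intensity(nu1) / thermo_conversion(nu1)) / \
            (dust_intensity(nu2) / thermo_conversion(nu2))
print(f"dust amplitude at 150 GHz / 353 GHz (CMB units): {amp_ratio:.3f}")
print(f"corresponding factor in BB power: {amp_ratio**2:.1e}")

With these assumptions the dust B-mode power drops by a factor of a few hundred between 353 and 150 GHz, which is exactly what makes the 353 GHz channel such a clean dust monitor and the downward extrapolation feasible.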

It then turns out that if the sky is split into very many much smaller regions close to the poles rather than the 6 large ones above, the same results continue to hold on average, though obviously there is some scatter introduced by the fact that dust in different bits of the sky behaves differently. So this allows the Planck team to take the measured dust intensity in any one of these smaller regions and extrapolate down to see what the contribution to the BB power would be if measured at the BICEP2 frequency of 150 GHz. The result looks like this:

The level of dust contamination across the sky in measurements of the primordial B-mode signal. Blue is good, red is bad. The BICEP2 window is the black outline on the right.
This really sucks for BICEP2, who chose their particular patch of the sky precisely because, according to estimates from the 1990s and early 2000s, it was supposed to have very little dust. Planck is now saying that isn't true, and that there is a better region just a little further south. Even that better region isn't perfect, of course, but it may be clean enough to see a primordial GW signal of $r\sim 0.1$ to $0.2$ — if such a signal exists, and if we're lucky and/or figure out cleverer ways of subtracting the dust foreground.

The problem with the BICEP2 region is that Planck's estimate of the dust contribution there looks like this:

Planck's estimate of the dust contribution to the BB power spectrum at 150 GHz and in the BICEP2 sky window. The first bin is the one that's most relevant. The black line is the contribution primordial GW with $r=0.2$ would make, if they existed.
So it appears that in the BICEP2 window, in the $\ell$ region where primordial gravitational waves produce a measurable BB signal (and BICEP2 has measured something), dust is expected to produce the same amplitude of signal as an $r=0.2$ primordial signal would. In fact, even accounting for the uncertainties in the Planck analysis (the extent of the pink error bars on the plot) it is clear that (a) dust will be contributing significantly to the BICEP2 measurement, and (b) it's pretty likely that only dust is contributing.

Planck avoid explicitly saying that BICEP2 haven't seen anything but dust. This is because they haven't directly measured the dust contribution in that window and at 150 GHz. Rather what's shown in the plot above is based on a number of little steps in the chain of inference:
  1. generally, the BB polarization amplitude is dependent on the average total dust intensity in a region;
  2. the relationship between these two doesn't vary too much across the sky;
  3. generally, the frequency dependence of the amplitude shows a certain behaviour;
  4. and again this doesn't appear to vary too much across the sky;
  5. Planck have measured the average dust intensity in the BICEP2 window, and this gives the value shown in the plot above when extrapolated to 150 GHz;
  6. and the BICEP2 window doesn't appear to be a special outlier region on the sky that would wildly deviate from these average relationships;
  7. so, the dust amplitude calculated is probably correct.
Update: See the correction in the comments — the Planck paper actually does better than this. That is to say, they present one analysis that relies on all steps 1-7, but in addition they also measure the BB amplitude directly at 353 GHz and extrapolate that down to 150 GHz relying only on steps 3 and 4. The headline result is the one based on the second method, which actually gets a lower number for the dust amplitude. 

So they leave open the small possibility that despite having been unlucky in the original choice of the BICEP2 window, we've somehow ultimately got very lucky indeed and nevertheless measured a true primordial gravitational wave signal. 

Time will tell if this is true ... but the sensible betting has now got to be that it is not.

Incidentally, I have just learned that in two days' time I will be presenting a 30 minute lecture to a group of graduate students about this result. The lecture is not supposed to be very detailed, but I'm also not very much of an expert on this. So if you spot any errors or omissions above, please do let me know through the comments box!

Monday, August 25, 2014

A Supervoid cannot explain the Cold Spot

In my last post, I mentioned the claim that the Cold Spot in the cosmic microwave background is caused by a very large void — a "supervoid" — lying between us and the last scattering surface, distorting our vision of the CMB, and I promised to say a bit more about it soon. Well, my colleagues (Mikko, Shaun and Syksy) and I have just written a paper about this idea which came out on the arXiv last week, and in this post I'll try to describe the main ideas in it.

First, a little bit of background. When we look at sky maps of the CMB such as those produced by WMAP or Planck, obviously they're littered with very many hot and cold spots on angular scales of about one degree, and a few larger apparent "structures" that are discernible to the naked eye or human imagination. However, as I've blogged about before, the human imagination is an extremely poor guide to deciding whether a particular feature we see on the sky is real, or important: for instance, Stephen Hawking's initials are quite easy to see in the WMAP CMB maps, but this doesn't mean that Stephen Hawking secretly created the universe.

So to discover whether any particular unusual features are actually significant or not we need a well-defined statistical procedure for evaluating them. The statistical procedure used to find the Cold Spot involved filtering the CMB map with a special wavelet (a spherical Mexican hat wavelet, or SMHW), of a particular width (in this case $6^\circ$), and identifying the direction of the pixel with the coldest filtered temperature as the direction of the Cold Spot. Because of the nature of the wavelet used, this ensures that the Cold Spot is actually a reasonably sizable spot on the sky, as you can see in the image below:

The Cold Spot in the CMB sky. Image credit: WMAP/NASA.

Well, so we've found a cold spot. To elevate it to the status of "Cold Spot" in capitals and worry about how to explain it, we first need to quantify how unusual it is. Obviously it is unusual compared to other spots on our observed CMB, but this is true by construction and not very informative. Instead the usual procedure quite rightly compares the properties of the cold spots found in random Gaussian maps using exactly the same SMHW technique to the properties of the Cold Spot in our CMB. It is this procedure which results in the conclusion that our Cold Spot is statistically significant at roughly the "3-sigma level", i.e. only about 1 in every 1000 random maps has a coldest spot that is as "cold" as* our Cold Spot.** (The reason why I'm putting scare quotes around everything should become clear soon!)
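In case that procedure sounds mysterious, here is a sketch of how the null distribution gets built. I have replaced the SMHW by a simple difference-of-Gaussians band-pass filter (a crude stand-in), ignored masking entirely, and the file names and filter widths below are placeholders rather than anything taken from the actual analyses.

import numpy as np
import healpy as hp

# Null distribution for the 'coldest filtered spot' statistic from random
# Gaussian maps. 'cl_theory.txt' is a placeholder for a tabulated LCDM
# temperature power spectrum (C_ell in muK^2, starting at ell = 0).
nside = 128
cl = np.loadtxt('cl_theory.txt')

def coldest_filtered_pixel(m):
    # Difference of two Gaussian smoothings: a Mexican-hat-like band-pass
    # picking out features of a few degrees (widths chosen for illustration).
    wide = hp.smoothing(m, fwhm=np.radians(12.0))
    narrow = hp.smoothing(m, fwhm=np.radians(6.0))
    return (narrow - wide).min()

null = np.array([coldest_filtered_pixel(hp.synfast(cl, nside))
                 for _ in range(1000)])

# Same statistic on the real (cleaned, and in practice masked) CMB map
cmb = hp.read_map('cmb_map.fits')              # placeholder file name
observed = coldest_filtered_pixel(hp.ud_grade(cmb, nside))
print("fraction of random maps with a colder coldest spot:",
      (null < observed).mean())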

So there appears to be a need to explain the existence of the Cold Spot using additional new physics of some kind. One such idea is that of the supervoid: a giant region hundreds of millions of light years across which is substantially emptier than the rest of the universe and lies between us and the Cold Spot. The emptiness of this region has a gravitational effect on the CMB photons that pass through it on their way to us, making them look colder (this is called the integrated Sachs-Wolfe or ISW effect) — hence the Cold Spot.

Now this is a nice idea in principle. In practice, unfortunately, it suffers from a problem: the ISW effect is very weak, so to produce an effect capable of "explaining" the Cold Spot the supervoid would need to be truly super — incredibly large and incredibly empty. And no such void has actually been seen in the distribution of galaxies (a previous claim to have seen it turned out not to be backed up by further analysis).

It was therefore quite exciting when in May a group of astronomers, led by Istvan Szapudi of the Institute for Astronomy in Hawaii, announced that they had found evidence for the existence of a large void in the right part of the sky. Even more excitingly, in a separate theoretical paper, Finelli et al. claimed to have modeled the effect of this void on the CMB and proven that it exactly fit the observations, and that therefore the question had been effectively settled: the Cold Spot was caused by a supervoid.

Except ... things aren't quite that simple. For a start, the void they claimed to have found doesn't actually have a large ISW effect — in terms of central temperature, less than one-seventh of what would be needed to explain the Cold Spot. So Finelli et al. relied on a rather curious argument: that the second-order effect (in perturbation theory terms) of this void on CMB photons was somehow much larger than the first-order (i.e. ISW) effect. A puzzling inversion of our understanding of perturbation theory, then!

In fact there were a number of other reasons to be a bit suspicious of the claim, among which were that N-body simulations don't show this kind of unusual effect, and that several other larger and deeper voids have already been found that aren't aligned with Cold Spot-like CMB features. In our paper we provide a fuller list of these reasons to be skeptical before diving into the details of the calculation, where one might get lost in the fog of equations.

At the end of the day we were able to make several substantive points about the Cold Spot-as-a-supervoid hypothesis:
  1. Contrary to the claim by Finelli et al., the void that has been found is neither large enough nor deep enough to leave a large effect on the CMB, either through the ISW effect or its second-order counterpart — in simple terms, it is not a super enough supervoid.
  2. In order to explain the Cold Spot one needs to postulate a supervoid that is so large and so deep that the probability of its existence is essentially zero; if such a supervoid did exist it would be more difficult to explain than the Cold Spot currently is!
  3. The possible ISW effect of any kind of void that could reasonably exist in our universe is already sufficiently accounted for in the analysis using random maps that I described above.
  4. There's actually very little need to postulate a supervoid to explain the central temperature of the Cold Spot — the fact that we chose the coldest spot in our CMB maps already does that!
Point number 1 requires a fair bit of effort and a lot of equations to prove (and coincidentally it was also shown in an independent paper by Jim Zibin that appeared just a day before ours), but in the grand scheme of things it is probably not a supremely interesting one. It's nice to know that our perturbation theory intuition is correct after all, of course, but mistakes happen to the best of us, so the fact that one paper on the arXiv contains a mistake somewhere is not tremendously important.

On the other hand, point 2 is actually a fairly broad and important one. It is a result that cosmologists with a good intuition would perhaps have guessed already, but that we are able to quantify in a useful way: to be able to produce even half the temperature effect actually seen in the Cold Spot would require a hypothetical supervoid almost twice as large and twice as empty as the one seen by Szapudi's team, and the odds of such a void existing in our universe would be something like a one-in-a-million or one-in-a-billion (whereas the Cold Spot itself is at most a one-in-a-thousand anomaly in random CMB maps). A supervoid therefore cannot help to explain the Cold Spot.***
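For the curious, the flavour of that probability estimate is easy to reproduce: smooth the linear density field on the scale of the would-be void, compute its variance from the power spectrum, and ask how far out in the Gaussian tail the required depth sits. The power spectrum file, the 150 Mpc/h scale and the depth of $-0.3$ below are placeholders, not the numbers from our paper (which also deals properly with the mapping between linear and actual void depths).

import numpy as np
from scipy.integrate import simpson
from scipy.special import erfc

# Probability of a top-hat-smoothed underdensity as deep as delta on scale R,
# treating the linear density field as Gaussian. 'pk_linear.txt' is a placeholder
# for a tabulated linear P(k) (k in h/Mpc, P in (Mpc/h)^3).
k, pk = np.loadtxt('pk_linear.txt', unpack=True)

def tophat_window(x):
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

def sigma_R(R):
    integrand = k**2 * pk * tophat_window(k * R)**2 / (2.0 * np.pi**2)
    return np.sqrt(simpson(integrand, x=k))

R = 150.0          # smoothing scale in Mpc/h (illustrative)
delta = -0.3       # required linear depth of the underdensity (illustrative)
sig = sigma_R(R)
prob = 0.5 * erfc(abs(delta) / (np.sqrt(2.0) * sig))
print(f"sigma({R:.0f} Mpc/h) = {sig:.3f},  P(delta < {delta}) = {prob:.2e}")

Because sigma falls steeply with smoothing scale, demanding an only modestly deeper or larger void very quickly pushes the probability down by orders of magnitude, which is the essence of point 2.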

Point 3 is again something that many people probably already knew, but equally many seem to have forgotten or ignored, and something that has not (to my knowledge) been stated explicitly in any paper. My particular favourite though is point 4, which I could — with just a tiny bit of poetic licence — reword as the statement that
"the Cold Spot is not unusually cold; if anything, what's odd about it is only that it is surrounded by a hot ring"
I won't try to explain the second part of that statement here, but the details are in our paper (in particular Figure 7, in case you are interested). Instead what I will do is to justify the first part by reproducing Figure 6 of our paper here:

The averaged temperature anisotropy profile at angle $\theta$ from the centre of the Cold Spot (in red), and the corresponding 1 and $2\sigma$ contours from the coldest spots in 10,000 random CMB maps (blue). Figure from arXiv:1408.4720.

The blue shaded regions show the confidence limits on the expected temperature anisotropy $\Delta T$ at angle $\theta$ from the direction of the coldest spots found in random CMB maps using exactly the same SMHW selection procedure. The red line, which is the measured temperature for our actual Cold Spot, never goes outside the $2\sigma$ equivalent confidence region. In particular, at the centre of the Cold Spot the red line is pretty much exactly where we would expect it to be. The Cold Spot is not actually unusually cold.
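The measurement behind that figure is straightforward to reproduce with healpy: average the map temperature in annuli around the Cold Spot direction, which sits at roughly Galactic $(l,b)=(209^\circ,-57^\circ)$. The map file name below is a placeholder, and to get the blue bands you would repeat the same thing around the coldest spots of many random maps.

import numpy as np
import healpy as hp

# Azimuthally averaged temperature profile around a chosen direction.
cmb = hp.read_map('cmb_map.fits')              # placeholder file name
nside = hp.get_nside(cmb)
lon, lat = 209.0, -57.0                        # approximate Cold Spot direction (Galactic)
centre = hp.ang2vec(np.radians(90.0 - lat), np.radians(lon))

edges = np.radians(np.arange(0.0, 21.0, 1.0))  # annuli out to 20 deg, 1 deg wide
profile = []
for rin, rout in zip(edges[:-1], edges[1:]):
    outer = hp.query_disc(nside, centre, rout)
    inner = hp.query_disc(nside, centre, rin) if rin > 0 else np.array([], dtype=int)
    annulus = np.setdiff1d(outer, inner, assume_unique=True)
    profile.append(cmb[annulus].mean())

for theta, dT in zip(np.degrees(edges[:-1]), profile):
    print(f"{theta:4.0f}-{theta + 1:.0f} deg : mean dT = {dT: .3e}")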

Just before ending, I thought I'd also mention that Syksy has written about this subject on his own blog (in Finnish only): as I understand it, one of the points he makes is that this form of peer review on the arXiv is actually more efficient than the traditional one that takes place in journals.

Update: You might also want to have a look at Shaun's take on the same topic, which covers the things I left out here ...

* People often compare other properties of the Cold Spot to those in random maps, for instance its kurtosis or other higher-order moments, but for our purposes here the total filtered temperature will suffice.

** Although as Zhang and Huterer pointed out a few years ago, this analysis doesn't account for the particular choice of the SMHW filter or the particular choice of $6^\circ$ width — in other words, it doesn't account for what particle physicists call the "look-elsewhere effect", which means the result is actually much less impressive than it sounds.

*** If we'd actually seen a supervoid which had the required properties, we'd have a proximate cause for the Cold Spot, but also a new and even bigger anomaly that required an explanation. But as we haven't, the point is moot.