Reevaluating the MLE, and Negative Intercepts
By Elijah Bernstein-Cooper, July 22, 2015.

Table of Contents

Likelihoods
Evaluating MLE Fits
Understanding the Intercept

Likelihoods

The likelihood calculations discussed in the previous post are incorrect. There was a bug related to masking NaNs in the HI cube. All figures below were calculated using these versions of cloud_analysis.py and cloudpy.py.

The likelihoods do indeed require rescaling the errors after the first MLE calculation and then recalculating the MLE with the rescaled errors. Below are the results.
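Concretely, the two-pass procedure looks something like the sketch below. This is not the actual cloudpy code; the Gaussian likelihood, the model grid, and the names data, model_grid, and error are assumptions for illustration only.

```python
import numpy as np

def rescaled_mle(data, model_grid, error):
    """Two-pass MLE sketch: fit once, rescale the error so the best-fit
    model has a reduced chi-squared of unity, then refit."""
    # Mask NaNs up front so they cannot distort the likelihoods
    # (the bug above came from NaNs in the HI cube entering the sums).
    valid = np.isfinite(data)

    def log_likelihoods(err):
        # Gaussian log-likelihood of each model in the grid
        return np.array([-0.5 * np.sum(((data - model)[valid] / err) ** 2)
                         for model in model_grid])

    # First pass with the nominal error
    logl = log_likelihoods(error)
    best = np.argmax(logl)

    # Rescale the error so chi^2 / dof = 1 for the best-fit model
    dof = valid.sum() - 1
    chi2_best = -2.0 * logl[best]
    error_rescaled = error * np.sqrt(chi2_best / dof)

    # Second pass with the rescaled error
    return log_likelihoods(error_rescaled), error_rescaled
```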


Perseus, Lee+12

Figure 1. - Perseus likelihoods with Lee+12 data. These likelihoods are not similar to those found in previous iterations of the MLE analysis: the width is much larger, and the intercept is near 0. With a width of 40 km/s the column densities will be much closer to those of Lee et al. (2012) than with the width of about 13 km/s found earlier. The bulk of the emission is included within a width of 20 km/s, so increasing the width to 40 km/s will not change the column densities much. The distributions are discussed later in the post.
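To see why the extra width adds little column density when the emission is concentrated, here is a toy calculation. The Gaussian spectrum and its numbers are placeholders, not the Perseus data; only the 1.823e18 cm^-2 (K km/s)^-1 conversion for optically thin 21 cm emission is standard.

```python
import numpy as np

# Illustrative HI spectrum: a Gaussian line centred at 0 km/s with a
# 5 km/s dispersion (placeholder numbers).
vel = np.arange(-50.0, 50.0, 0.5)              # velocity axis [km/s]
tb = 50.0 * np.exp(-0.5 * (vel / 5.0) ** 2)    # brightness temperature [K]
dv = vel[1] - vel[0]

def nhi(width):
    """N(HI) [cm^-2] from optically thin 21 cm emission integrated over a
    velocity window of the given full width [km/s] about the line centre."""
    window = np.abs(vel) < width / 2.0
    return 1.823e18 * np.sum(tb[window]) * dv

# With most of the emission inside +/- 10 km/s, doubling the window from
# 20 km/s to 40 km/s adds only a few percent to the column density.
print(nhi(20.0) / nhi(40.0))   # ~0.95
```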



Perseus, Planck

Figure 2. - Perseus likelihoods with Planck data. These likelihoods are very similar to those found with the Lee+12 data above. This is promising!



Taurus

Figure 3. - Taurus likelihoods. These likelihoods are not similar to those found in previous iterations of the MLE analysis, which found widths of about 10 km/s. This is likely because many NaNs in the cube distorted the earlier likelihood calculations.



California

Figure 4. - California likelihoods. These likelihoods are similar to those found in previous iterations of the MLE analysis. This means that using either the faint-end or bright-end residual masking selects the same diffuse pixels for the MLE analysis.


Evaluating MLE Fits

We can plot A_V vs. N(HI) to examine the distribution of points and the fitted relationship. I have included the MLE fit as well as a polynomial fit to the data. We can see that for each cloud the masking process has successfully masked outliers which do not follow a linear correlation between A_V and N(HI). That is, for the masked data, A_V is linearly dependent on N(HI). See yesterday's post for the first failed attempt at examining this relationship (the plotted MLE relationship loaded the wrong MLE parameters, hence it was incorrect).

If we compare the masked, binned data to the entire unbinned dataset, we confirm once again that the masking is working correctly. If all pixels were included, or only a simple cutoff were used, the relationship would be skewed to a lower DGR and a higher intercept; e.g., if in Figure 5 we masked only the pixels above some A_V cutoff in mag, then more pixels would populate the lower end of the A_V range. A sketch of the fit comparison follows below.
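As a rough illustration (not the actual analysis code), the sketch below fits both a line, analogous to the MLE model A_V = DGR x N(HI) + intercept, and a higher-order polynomial to masked data; av, nhi, and mask are synthetic stand-ins for the binned data and the residual mask.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-ins for the binned data: N(HI) in units of 1e20 cm^-2,
# A_V in mag, and a boolean mask playing the role of the residual mask.
rng = np.random.default_rng(0)
nhi = rng.uniform(5.0, 30.0, 1000)                       # [1e20 cm^-2]
av = 0.1 * nhi + 0.1 + rng.normal(0.0, 0.2, nhi.size)    # [mag]
mask = av < 2.0                                          # placeholder mask

# Linear fit, analogous to the MLE model A_V = DGR * N(HI) + intercept
dgr, intercept = np.polyfit(nhi[mask], av[mask], 1)

# Higher-order polynomial fit to check for curvature in the masked data
poly = np.polyfit(nhi[mask], av[mask], 3)

x = np.linspace(nhi.min(), nhi.max(), 100)
plt.plot(nhi, av, ',', alpha=0.3, label='all pixels')
plt.plot(nhi[mask], av[mask], '.', ms=2, label='masked (diffuse) pixels')
plt.plot(x, dgr * x + intercept, label='MLE-style linear fit')
plt.plot(x, np.polyval(poly, x), label='polynomial fit')
plt.xlabel('N(HI) [1e20 cm^-2]')
plt.ylabel('A_V [mag]')
plt.legend()
plt.show()
```

With synthetic data like this the polynomial tracks the line closely; in the real figures, a strong departure of the polynomial fit from the MLE line would indicate that the mask is still admitting pixels that break the linear A_V-N(HI) correlation.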


Perseus, Lee+12

Figure 5. - Perseus A_V vs. N(HI) using Lee+12 data. The top panel shows the data points to which the MLE parameters were fitted, i.e., the masked data, together with the corresponding fit. The bottom panel shows all of the unbinned data; the contour levels increase logarithmically, and about 10,000 points are included. Each panel shows the model line determined by the MLE method as well as a polynomial fit to the data.



Perseus, Planck

Figure 6. - Perseus A_V vs. N(HI) using Planck data.



Taurus

Figure 7. - Taurus A_V vs. N(HI).



California

Figure 8. - California A_V vs. N(HI).


Understanding the Intercept

A negative intercept means that there is excess HI emission, or rather an HI background not associated with dust. In California, there is even a known A_V background (see here). A negative intercept means that an amount of A_V equal to the magnitude of the intercept, in mag, is associated with this HI background; dividing it by the California DGR, in cm^2 mag, gives the HI background column density in cm^-2.
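In code, the conversion from a negative intercept to an HI background column is a one-liner; the intercept and DGR values below are placeholders, not the fitted California values.

```python
# Sketch of the intercept-to-background conversion described above.
# The model assumed is A_V = DGR * N(HI) + intercept.
intercept = -0.2     # mag (hypothetical negative intercept)
dgr = 1.0e-21        # cm^2 mag (hypothetical dust-to-gas ratio)

# The A_V associated with the HI background is the magnitude of the
# intercept; dividing by the DGR converts it to an HI column density.
nhi_background = abs(intercept) / dgr   # cm^-2
print('HI background = {0:.1e} cm^-2'.format(nhi_background))
```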

Below are the distributions for each cloud. Even with the negative intercept in California, most of the derived surface density is above 0. This should relieve the problem discussed in the earlier post on negative surface densities.


Perseus

Taurus

California

Figure 9. - Perseus, Taurus, and California distributions derived from Planck data. It is apparent that California and Taurus have much wider distributions, with thresholds corresponding to large hikes in the derived surface density.