How can apparent magnitude be negative?

  • What is the reason for that scale? Is it because otherwise defining a maximum would be too hard? Why do objects that appear brighter get assigned a smaller number (down to negative values)?


    Remember that the word magnitude means "greatness" or "importance". Something of the "1st magnitude" is very important, whereas something of the "2nd magnitude" is less great/important.

  • Apparent magnitude is a measure of how bright an object appears to an observer on Earth, meaning it is a function of both the object's intrinsic luminosity and its distance from us. The concept of magnitudes dates back to the Ancient Greeks, who categorized the stars in the sky into six magnitudes (the brightest being 1 and the faintest being 6). Each successively lower magnitude was roughly twice as bright as the one before, meaning the scale was logarithmic. We still use magnitudes for historical reasons, though the scale was later standardized to use the formula



    $m_x - m_{x,0} = -2.5\log_{10}(\frac{F_x}{F_{x,0}})$



    where $m_x$ and $F_x$ are the magnitude and flux of the object of interest, and $m_{x,0}$ and $F_{x,0}$ are the magnitude and flux of a reference object (usually Vega is used to define the zero point in magnitude). This means any object that appears brighter than Vega has a negative magnitude. There is no limit to how bright an object can appear, so there is no lower limit to magnitudes. The Sun, for example, being the brightest object in our sky, has a magnitude of roughly -27. (A short numerical sketch of this formula appears after the comments below.)


    As a picky detail, I think it's 100^(1/5), not 2.5 precisely.

    Nope, it's precisely 2.5 in the formula. The factor of $100^\frac{1}{5}$ is the increase/decrease in flux corresponding to a magnitude change of -/+ 1 mag.

    Can you source that? Wikipedia doesn't appear to agree with you.

    Wikipedia does agree with me - the first equation on the page for apparent magnitude is precisely what I have above. Further, the first paragraph on the page says "In addition, the magnitude scale is logarithmic: a difference of one in magnitude corresponds to a change in brightness by a factor of $\sqrt[5]{100}$ or about 2.512." You can also show this with a simple calculation, comparing an object with magnitude -1 to an object with magnitude 0: $-1=-2.5\log_{10}(\frac{F_x}{F_{x,0}})$ yields $\frac{F_x}{F_{x,0}}=10^{0.4}\sim2.512$

    I think it's important to include the reason why only 6 magnitudes were chosen, rather than 5 or 10, etc. That reason being that the average human eye can only differentiate about 6 levels of brightness; the Ancient Greeks would have had a hard time narrowing the stars down into finer categories (e.g. 1.5, 2.5, 3.5, etc.) without the aid of any sort of optics (remember that not everyone would have had 20/20 vision either).

    @barrycarter: I think the confusion is caused by the fact that the ratio between two successive magnitudes (100**(1/5), or about 2.512) happens to be quite close to 2.5. But the `2.5` in the formula is exact. It's 5 / log10(100). There just happen to be two relevant numbers, one an exact fraction and one an irrational number.
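
    A minimal numerical sketch of the formula above (Python, taking Vega as the zero-point reference, as in the answer; the Sun/Vega flux ratio used here is only an assumed order-of-magnitude figure for illustration):

    ```python
    import math

    def magnitude_relative_to_vega(flux_ratio):
        """Apparent magnitude of an object whose flux is flux_ratio times that of Vega (m = 0)."""
        return -2.5 * math.log10(flux_ratio)

    # One magnitude corresponds to a flux ratio of 100**(1/5), about 2.512
    print(100 ** (1 / 5))                         # 2.5118...
    print(magnitude_relative_to_vega(10 ** 0.4))  # -1.0, the worked example from the comments

    # Assumed figure: the Sun appears roughly 10**10.8 times brighter than Vega,
    # which gives an apparent magnitude of about -27, as stated in the answer.
    print(magnitude_relative_to_vega(10 ** 10.8))  # -27.0
    ```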

  • The magnitude scale is a logarithmic scale. An increase of 1 magnitude corresponds to the object appearing about 2.5 times dimmer. Vega, a bright star, has a magnitude of 0, so any star that is brighter than Vega has a magnitude less than 0 (see the sketch after the comments on this answer).



    This is an odd system; the reason for it is historical. The ancient Greeks ordered stars by their brightness into categories: the first-magnitude stars were the brightest and the sixth-magnitude stars were the dimmest. When it became possible to measure the light intensity from stars exactly, the scale was chosen so as to approximate the traditional magnitudes, but with these formal measurements of brightness, the logarithmic nature of the scale made it inevitable that the brightest objects would have a magnitude below 1, or even negative.


    It just didn't seem like a reasonable scale, but if its origins are historical then that is fine.

    It is no more unreasonable than temperatures being negative; after all, nobody uses the Kelvin scale except scientists.
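
    As a brief illustration of this answer's point (a Python sketch assuming Vega defines magnitude 0, with made-up flux values in arbitrary units):

    ```python
    import math

    def apparent_magnitude(flux, vega_flux=1.0):
        """Magnitude relative to Vega; objects brighter than Vega come out negative."""
        return -2.5 * math.log10(flux / vega_flux)

    print(apparent_magnitude(0.398))   # ~ +1.0  (about 2.5 times dimmer than Vega)
    print(apparent_magnitude(1.0))     #    0.0  (Vega itself)
    print(apparent_magnitude(2.512))   # ~ -1.0  (about 2.5 times brighter than Vega)
    ```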
