Night vision is grainy but viewable.
Well, the reason it's grainy is probably one of three different things, or more likely a combination of all three.
First, the camera sensor is covered by a color filter array: each photosite sits under a red, green, or blue filter (typically a Bayer pattern, with one red, two green, and one blue filter over every 2x2 group of photosites), and together they cover nearly the full range of visible light and colors from white to black. Since only the red-filtered photosites respond to the red end of the spectrum (including near infrared), the green and blue photosites see essentially no light in near-total darkness.

As a result, the effective resolution drops by roughly a factor of three or four (a 9 MP camera behaves more like a 2-3 MP camera), so you naturally get a grainier image. This is the mirror image of how displays work: each display pixel is made up of three tiny red, green, and blue lights (LED subpixels) clustered together to reproduce colors from black to white and a good cross-section of everything in between. On the sensor side it's as if you turned off two out of every three LEDs on your monitor (the green and blue ones), left those areas black, and drew the picture with only the red LEDs.
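If it helps to picture that first point, here's a rough toy sketch in Python/numpy (my own illustration, not anything from an actual camera driver) of a Bayer-filtered sensor under IR-only light, where only the red-filtered photosites pick up any signal:

```python
import numpy as np

# Toy example: a tiny 12x16 "sensor" lit only by near-IR light.
# All sizes and values here are made up for illustration.
H, W = 12, 16
scene = np.random.rand(H, W)        # IR light falling on every photosite

# Standard RGGB Bayer layout: red at (even row, even col),
# blue at (odd, odd), green everywhere else in each 2x2 group.
rows, cols = np.indices((H, W))
red_mask = (rows % 2 == 0) & (cols % 2 == 0)

# Under IR-only illumination the green/blue photosites read ~0,
# so the usable image comes from the red photosites alone.
raw = np.where(red_mask, scene, 0.0)

live = np.count_nonzero(raw)
print(f"photosites with signal: {live} of {H*W} "
      f"({live / (H*W):.0%} of the sensor)")   # ~25% for an RGGB pattern
```

So roughly three quarters of the sensor is contributing nothing, which is where the apparent resolution loss comes from.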
Second, with low-light photography you get a phenomenon where the sensor, seeing almost no light, instead produces "noise": specks of random dark and bright pixels across the entire frame. Stray electrons moving about near the photosites (bleed-over), among other things, cause this. In video it shows up as what look like moving dirt specks; in stills, as static specks and an overall graininess. From Wikipedia ("How to Avoid Noise in Your Digital Photography"):

"In a digital camera, noise manifests itself as speckles, usually colored and without pattern. These are generally caused by unwanted electron flow in and around the sensor adding to the desired electron flow. Noise can be caused by imperfections in the sensor itself such as "hot pixels" [4], randomness in the distribution of the limited numbers of photons available at low light levels [5], and the sensor or camera overheating [6]."
Third, through software, the app turns up the sensitivity (gain) of the red photosites and turns the blue and green ones down or off, further reinforcing that apparent three-to-four-fold reduction in resolution.
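Roughly speaking, that software step amounts to something like the sketch below (a hypothetical illustration, not the actual app's code; the gain value is made up). Note that the gain amplifies the channel's noise right along with its signal, so the grain gets more obvious, not less:

```python
import numpy as np

rng = np.random.default_rng(1)

# A dim, red-heavy scene: mean photon counts per channel are made-up values.
rgb = rng.poisson([4.0, 0.2, 0.2], size=(480, 640, 3)).astype(float)
red_gain = 16.0

night_view = np.zeros_like(rgb)
night_view[..., 0] = np.clip(rgb[..., 0] * red_gain, 0, 255)  # boost red only

print("mean red value before/after gain:",
      rgb[..., 0].mean().round(2), night_view[..., 0].mean().round(2))
```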
Professional infrared cameras use sensors whose sensitive areas are all red/IR-capable (no separate blue and green filters), so there aren't empty dots in every direction across two out of every three areas of the sensor clusters.