Apple's camera design choices from the iPhone 6S Plus to the 12 Pro Max [pdf] (systemplus.fr)
226 points by giuliomagnifico on July 8, 2021 | 119 comments


Kind of interesting but the good stuff is blurred out.

Makes me wish that these big tech companies would release the full uncensored data for their obsolete products. How nice would it be if Apple shared a huge archive of technical info on the original iPhone. Surely there is nothing left on that device that is secret or of any use beyond curiosity.


Would be nice, but it’s virtually impossible. Products like the iPhone are a combination of many different suppliers’ parts and many different vendors’ work, all with various contractual requirements.

There’s almost zero upside for companies to go through all of the effort of preparing and releasing that information, however trivial it may seem from the outside.


This is why an EOL right to repair law covering devices would be progress. Alter the expectation from "Apple would have to ask" to "Apple expects their suppliers to comply with the law."


If you select the blurred section in a browser, then paste it somewhere, you can view the text/information. Worked for me in most of the sections, but not all.


Ah, accessibility features! Classic example of why you need to check all metadata in PDF documents before releasing them.


You can remove the blur from the images by editing the source code of the pdf with a text editor...
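
If you want to poke at it, here's a crude sketch that inflates the Flate-compressed streams so the page operators become grep-able plain text (the filename is hypothetical; object streams or exotic filters would need a proper tool like qpdf or pikepdf):

    import re, zlib

    # Inflate every Flate-compressed stream so the PDF page description
    # becomes searchable plain text. Crude: assumes streams are delimited
    # exactly like this and are zlib data.
    data = open("report.pdf", "rb").read()
    for m in re.finditer(rb"stream\r?\n(.*?)endstream", data, re.S):
        try:
            text = zlib.decompressobj().decompress(m.group(1))
            print(text[:300])  # peek; search for filter/drawing operators here
        except zlib.error:
            pass  # not zlib data (e.g. a JPEG image stream)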


Looks like 4 MB of Perl. What's a good string to search for?


How are you able to copy and paste the blurred sections? I can't even select them.


Has anybody reverse engineered the interface for Apple's camera modules, particularly the front modules with LIDAR?

They can be had for $10 a pop on Alibaba now, and they're much nicer than similarly priced webcams, let alone LiDARs.

Seems like a part that's ripe for a Raspberry Pi interface board.


I would be very surprised if the camera modules put out anything different from the standard image sensor interfaces such as SPI, MIPI, and SLVS. Anything else would require extra circuitry that is too big. Then it just becomes a case of figuring out the control for the lens module.


I'm actually surprised that nobody has tried to poke at those lines yet and built something similar to https://hackaday.com/2021/07/05/how-to-drive-smartphone-scre... but for the camera.

Actually, since that project links to an iPhone 4 screen as an example, I'd perhaps start by probing for MIPI signals on an iPhone 6S camera, perhaps using something like https://hackaday.com/2018/11/29/mipi-csi-2-implementation-in...?

This post is fascinating, btw: https://www.circuitvalley.com/2020/01/spi-mipi-bridge-fpga-v...


The front iPhone camera uses an infrared array of dots for FaceID, not LIDAR. It's basically a miniaturized version of the Kinect from the Xbox (Apple bought the company that created it).


The Face ID/TrueDepth sensor is, however, made by the Austrian company ams AG, not Apple.


The sensor is from ST. Besides many previous publications/reveng/leaks, this is also in the systemplus slides on page 16, top right.


This, though? https://www.reuters.com/article/apple-forecast-ams-idUSL8N1Z...

> AMS provides Apple with optical sensors for 3D facial recognition features on its newest smartphones

“We see a risk that Apple moves to dual sourcing for the face ID – which currently is single sourced from AMS - in order not to rely on deliveries from just one supplier and also in order to have a favorable pricing power,” said Hauck & Aufhaeuser analysts in a note to clients.


I wonder what quality the “raw” camera has. My understanding is that the software makes a bigger difference to the end result than the actual optics do. So much so that Apple has a dedicated image processor as part of the Bionic chip.

Not to mention the “neural engine” that’s used for depth processing and segmentation.


The image from any digital camera (SLR or phone) is the result of computation, unlike a film camera, where the image is the result of chemical reactions.

Optics (glass) don't play much of a role in phone camera cost or product development; the cost (and effort) lies in the sensor, CPUs, NPUs, and DSPs (hardware), and then in the investment in writing image processing software.


> software makes a bigger difference to the end result than the actual optics

Not everyone wants artificially manipulated photos that you can instantly upload to Instagram without any post work. But you do need good optics and a good sensor to capture good photos - the rest is post manipulation. Do you think RED or ARRI cameras cost $50k+ for their built-in software? Nope, they don't actually have any of the "neural net" hype you've bought into from Apple's marketing. It's all done in post. Practically nothing you see in movies or TV or Netflix or other professional broadcast is done on Apple toy cameras using their state-of-the-art "neural engine" software. I haven't read about or seen any, but I reserve the right to be wrong, so I said "practically."

Besides, you've got to start somewhere, and the camera interface would come before the splashy software built on top of it.

Just give me the RAW/flat log files (which contain the most information and dynamic range) and I will do the rest with a real editing program that costs almost as much as your phone itself - and it will look way better.


It’s not (just) marketing. You can try to “brute-force” your way up in image quality with a bigger sensor, a better lens, etc., but there is a point where you hit diminishing returns.

I’m not deeply familiar with high-end professional cameras, but low-to-mid pro cameras sort of slept for a long time, while phone cameras, lacking the trivial upgrade route of bigger sensors, had to go the harder route of more intelligent processing. There is a really interesting article about it that I may try to dig out, but modern phones use some great tricks. For example, lens stabilization turns the minor movements of our hand into an advantage: the same physical point gets sampled by several adjacent color-filtered sensor positions, and a clever NN can use this additional information to win back the resolution lost to the color filter array. Also, HDR photos simply need intelligence.
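
To illustrate the multi-frame idea (a toy sketch, not anyone's actual pipeline): samples from slightly shifted low-res frames are placed onto a finer grid and averaged. The offsets are assumed known here; a real pipeline has to estimate them by registering the frames first.

    import numpy as np

    # Toy "shift-and-add" super-resolution: several low-res frames with
    # known sub-pixel offsets are accumulated onto a 2x finer grid.
    def shift_and_add(frames, offsets, scale=2):
        h, w = frames[0].shape
        acc = np.zeros((h * scale, w * scale))
        cnt = np.zeros_like(acc)
        for frame, (dy, dx) in zip(frames, offsets):
            # Place each low-res sample at its sub-pixel position on the fine grid.
            ys = (np.arange(h)[:, None] * scale + round(dy * scale)) % (h * scale)
            xs = (np.arange(w)[None, :] * scale + round(dx * scale)) % (w * scale)
            acc[ys, xs] += frame
            cnt[ys, xs] += 1
        return acc / np.maximum(cnt, 1)  # average wherever samples landed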

All in all, in certain (rare) situations a newer iPhone may very well shoot a much better photo than a DSLR, despite the comparatively tiny sensor. Also, just think about the way the black hole was photographed: it didn't use an Earth-sized sensor either, but the effect was similar to if one had been used.


Your RAW/flat log files are artificially manipulated too, FYI; when you edit RAW, you just delay that artificial manipulation until you get to a computer.

Short of a 3CCD prism setup, most cameras use Bayer (or other) filters that capture a close but false image that then is reprocessed back to an approximation of the scene.

When people talk about smartphones having better processing, it just means on top of demosaicing, the software might be applying additional algorithms.

P.S. RED cameras don’t use 3CCD yet still cost $50k+ implying artificial manipulation is the way to go due to cost and physical limitations.


Are they available for $10 or is that the price some "iPhone Teardown!!" article wrote?

Because I tend to be sceptical of those articles. They start with the premise that everything is worth the sum of its component parts, which already hurts their credibility (if they applied that recursively, they'd conclude that nothing is worth anything). Then they guess some random values for Apple's purchasing costs.


I'm not sure why you're asking this. The poster quite obviously indicates you can trivially buy them on AliExpress. You could verify this yourself in less time than it took to type this comment. And indeed the front-facing camera can be had for very cheap. The rear module with LiDAR is more expensive, though.


If you are interested in reverse engineering and cost analysis, Munro & Associates does the same for cars.

Their YouTube channel is really interesting, you should check it out: https://www.youtube.com/channel/UCj--iMtToRO_cGG_fpmP5XQ


That’s a great channel, thanks for the recommendation. It speaks to their reputation that they got a one-on-one interview with Elon Musk.


It’s pretty good content, but you should know some history. He first trashed the Tesla Model 3, but after a huge Internet outcry he realized that there's a lot of money in catering to fans. He's now a proper Tesla fanboy, making money on Tesla stock and selling stickers to the fans. A lot of credibility is gone. But the content is still pretty good for learning the basics of how stuff like that works; just keep the bias in mind.


How can you tell the difference between the events you described, and Munro genuinely changing his mind about Tesla and the Model 3?


Watched the video about reducing the VW ID.4 battery tray yesterday; it was really interesting to hear how they collaborated with so many groups to redesign it.

https://m.youtube.com/watch?v=cjJUpqo1YDM


In my experience with dual cameras, portraits and HDR with sky come out well, but greenery not so much. I'm enthusiastic about larger sensors like the one in the Sharp Aquos R6: https://www.phonearena.com/news/sharp-aquos-r6-camera-sample...


I just can't understand why the Japanese companies manage to create these amazing devices with absolutely niche pricing and subpar software.

It seems like Chinese companies figured out a while back that they need to make their software at least seem good if they want to capture the global market. Obviously the price is an even bigger aspect of the equation.


Maybe they are happy selling exclusively to the Japanese market's geeks. By geek I mean the kind of tech fan who doesn't mind figuring out the interface. As for this particular device, I think Japan is a perfect test market: they seem to love DSLRs, and this phone is a sort of pure camera experience vs. the neural-network-enhanced multi-camera setups. I hope it catches on so more companies make such phones and drive prices down.


Very cool, I hope it catches on. I prefer one excellent camera over 3 or 4 average ones.


I think it’s similar to analogue vs digital filmmaking. Analogue is harder to do, but can be uniquely amazing.


Wow @ how badly this site is broken with an ad blocker, and even more wow when you disable it.


And the pictures load so slowly. I found another site with R6 picture samples, compared against the iPhone 12 Pro (mostly), the Galaxy S21 Ultra, and the Xiaomi Mi 11 Ultra: https://www.xda-developers.com/sharp-aquos-r6-hands-on/


Has anyone else got the impression that the default compression ruined image quality on very recent devices? I moved from an iPhone SE2 to a 12 Pro, took pictures of trees and grass, and wondered why all my pictures looked blurred.

Then I activated the uncompressed pro format and got much better results (with images weighing a ton more, of course).

I still have to go back to the SE2 and see if I have the same problems, but it was the first time I was completely unimpressed by the camera when upgrading my phone.


The iPhone 12 Pro models seem to have had a software issue[1] that resulted in poor focusing; this was fixed later. A workaround at the time was using a different camera app, e.g. the link recommends ProCam, but I suspect any RAW capture app would do.

If it's still taking blurry photos, it is probably defective and needs to be returned.

[1] https://piunikaweb.com/2020/12/09/iphone-12-pro-camera-focus...


Thanks for the links. I looked at the video, and in my case it wasn't that blurry (and not particularly for close-ups). The problem is just that patches of grass were smeared when zooming in just a little (after taking the shot).

Moving to the pro format solved the issue, so my guess is that it's more a problem with the compression / post-processing than with the autofocus (although I must say I'm not really sure what constitutes post-processing anymore with modern cameras).


The default processing is fairly aggressive about noise reduction, yes. Here's a comparison on the 12 Pro Max. You can see that e.g. the brickwork and the clock face are clearer in the raw file. Many photography nerds would probably be perfectly happy with the (low) level of noise in the unprocessed RAW shot, but perhaps regular people's preferences differ in this regard.

The original images:

https://drive.google.com/drive/folders/1-2NQ0meQ-PgNvgObWmPT...

A cropped comparison saved as a high quality JPEG, for easier viewing:

https://drive.google.com/file/d/1VEMVPGDE7ck-0du9QYxydM1a3P8...

I don't think the compression is the issue here, by the way. It's just that when you don't shoot RAW you are getting the default denoising+sharpening.


Yeah, that's exactly the thing I've noticed.

But why?? Isn't a camera supposed to give you "crisp" pictures instead of artistic, smeared renditions of reality?

I thought the blame could be put on AI-based post-processing going out of control, but your comment seems to imply it's done on purpose. I really don't get the point.


It has been like that (or at least the aggressive denoising has) since the 6S I still have.

If you look at the images at 1:1, the edges sometimes look like watercolours.


This is very true in dim lighting. Take a photo in low light and any skin imperfections are washed away. Less true in daylight conditions, which is when mobile cameras shine, in my opinion.


Every camera shines in daylight conditions… some of them literally.

All phone cameras have tiny apertures and tiny pixels (relatively speaking to their DSLR and medium format counterparts). One can’t use them for any kind of serious photography.


There are lots of people making money off photos taken with phone cameras. Of course photography snobs will not consider these people to be “serious” photographers. Serious photography means taking photos of bored-looking attractive women in awkward poses with a blurry background :)


Just as true for the 6S (and friends' more modern iPhones) in bright sunlight which should be at ISO 100 in my experience...


Oh no, a small sensor like the iPhone's is quite noisy at ISO 100. The base ISO for these is around 20-25.


I've had issues since my iPhone 7 with pictures turning into watercolour renditions when zooming in any further than screen size. My wife's iPhone 7 doesn't have this issue. My 11 Pro doesn't have it as badly, but grass seems to be replaced by some texture instead of reality. Seems like overly aggressive compression in most cases.


I've noticed the same; it's frustrating that the defaults are this extreme, especially given how much storage those phones have nowadays.


To others who might be curious, the iPhone 12 now supports raw photo mode. Older models do not natively have this.


Nope, previous iPhones also supported RAW photos (my iPhone Xs does).


Natively? I just researched it, and it sounds like only the iPhone 12 can take raw photos without a 3rd-party camera app.


No, not natively in the sense the built-in Camera app can do it, but that's not what the comment was saying.


I'm pretty sure the 6s was the first iPhone to support RAW.



For a reverse engineering company... applying the blur as a PDF post-filter that can easily be removed with a grep on the PDF source seems not very smart...


All the rear cameras have been 12MP since the 6s?


Bear in mind the photos will usually be viewed on a 1080p phone screen, which is ~2MP, or a 4K display, which is ~8MP. The sensor has been getting bigger since the 6s, which means more light for each pixel.


Camera pixels are not like screen pixels; they're more like screen subpixels. Each one has a color filter in front of it in a Bayer pattern, meaning a 2MP screen corresponds more closely to a 6MP camera sensor than to a 2MP one. By the same logic, a 4K display already corresponds to a 24MP camera sensor, which is quite a lot: many modern full-frame cameras have 24MP, and more than that is considered high resolution.


I'm not sure where you got this impression. I've never heard it before.

My iPhone 11 takes photos that are 4000 x 3000 RGB pixels. That's 12 million picture elements, or 36 million sub-pixels. It's rated as 12 MP.


He does have a point in that a 12 MP CMOS sensor has 12 million sensor elements, not 36 million. Colour filters are placed in front of each pixel so RGB data can be extracted. Usually, 1/4 of the pixels are R, 1/4 are B, and 1/2 are G. The raw sensor data for each pixel thus contains either R, G, or B, at varying intensity, depending on the passband of each filter. The data is combined using a demosaicing/debayering algorithm: surrounding colour information is combined so that each output pixel has R, G, and B components.

Sorry if the writeup isn't that specific, I mostly work with monochrome CMOS cameras.

https://en.wikipedia.org/wiki/Demosaicing https://en.wikipedia.org/wiki/Bayer_filter
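
For the curious, a bare-bones bilinear demosaic looks something like this (assuming an RGGB layout; real ISPs use much smarter edge-aware interpolation):

    import numpy as np
    from scipy.ndimage import convolve

    def demosaic_rggb(raw):
        # Scatter each photosite's value into its own colour plane...
        h, w = raw.shape
        r = np.zeros((h, w)); g = np.zeros((h, w)); b = np.zeros((h, w))
        r[0::2, 0::2] = raw[0::2, 0::2]  # R sites
        g[0::2, 1::2] = raw[0::2, 1::2]  # G sites (two per 2x2 tile)
        g[1::2, 0::2] = raw[1::2, 0::2]
        b[1::2, 1::2] = raw[1::2, 1::2]  # B sites
        # ...then fill the holes by averaging neighbours (bilinear).
        k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
        k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
        return np.dstack([convolve(r, k_rb), convolve(g, k_g), convolve(b, k_rb)])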

edit: I should also state that I don't know anything about iPhone cameras. It's quite possible, but not typical that they have a 36 MP sensor producing 12 MP images.

edit 2: I read that the iPhone 12 has 1.7 µm pixels. A 36 MP 4:3 sensor with 1.7 µm pixels would be about 11.8 mm wide. A 12 MP 4:3 sensor would be just 6.8 mm wide.


No, the sensor has rows of 4000 grayscale pixels with different color filters on them. The actual RGB resolution is a quarter of that, but the debayering algorithm upscales the data by 2x in each direction. So yes, the RGB resolution is the same as the subpixel resolution, but at the same time it isn't.


You're correct. The numbers that are promoted for just about any camera out there refer to the actual size of the output, not the number of elements in the sensor. I'm not sure where the parent comment was getting his info from.


The actual size of the output is not the actual size of the sensor. The color data is interpolated. The promoted size is the output size, but that's not really full subpixel resolution in the monitor sense.

The promoted size is the number of photosites on the sensor, but each photosite is grayscale. Look at any camera's RAW format or the datasheet of a sensor (e.g. one of the popular Sony sensors). All of this applies to Bayer sensors, not Foveon, but Foveon is not particularly popular by any measure.


That's Bayer pattern interpolation.


But what about zooming in?


It would be a noisy high-resolution image. If the pixels aren't big enough to pick up enough photons for decent statistics, you get a mess.


I've played with a friend's Samsung S21 Ultra at 108 MP and it blows my iPhone 12 Pro Max out of the water, provided you can keep it stable - portraits were insanely realistic, yet I was unable to capture a single photo of my toddler!


X Megapixels lost all meaning a while ago.

The size of the sensor and the smarts behind it matter a lot more


Even during the megapixel wars, Fuji released some great point-and-shoot cameras that had fewer megapixels but better image quality thanks to a bigger sensor.

For example, the F31fd was great in its time and showed that megapixels weren't the only measure to look for.


> The size of the sensor and the smarts behind it matter a lot more

And the lenses.

My backup DSLR shooting at 6MP with $100 of glass on the front looks infinitely better than the photos out of my iPhone 12 Pro.


I've often wondered if Apple were to put their computational photography magic into a traditional camera (DSLR or similar), what would the result be?

I'm thinking if it were a good idea, then Canon, Sony, and Nikon would all be doing that.


It would be a good idea and I think that's the future. They are just asleep, hoping to get away with “lazy” work.


I feel like that's not giving the iPhone 12 camera nearly enough credit, but yes having infinitely more space for optics helps


Oh, I give the iPhone a ton of credit. It generally looks good enough, the HDR on video makes shit look better than reality, the low-light performance beats the absolute pants off of anything else I own by a _huge_ margin.

I bought an iPhone 12 Pro pretty much entirely as a camera to use to capture a bunch of photos of my kid because (1) I don't have my DSLR within arm's reach in the house all the time and (2) our house isn't very well lit. I have zero complaints about it. I probably take photos with my iPhone versus my actual photo gear at least 20:1, if not more.

The photos look great on a phone screen.

But even just printing a 4x6 photo from the iPhone and a 15 year old DSLR and putting them side by side, the difference is immediately obvious. The iPhone photos are disgustingly oversharpened while somehow being weirdly smoothed, but in all the wrong spots.

I grabbed a couple of my own photos taken on the same day in the same place under largely the same lighting conditions (it's never exact) and zoomed/cropped them to help demonstrate, and for privacy. Neither has been adjusted at all (I did try to improve the iPhone photo as best I could, but managed no substantial-looking improvement, so I posted it as shot), so focus more on the definition in the hair: https://imgur.com/a/7bn8W5S

See if you can guess which came from an iPhone 12 Pro, and which came from a Nikon D70 (about a $100 camera body at this point).

That all said, the entire point was... expecting anything more than 12MP out of an iPhone is absolutely pointless because the sensor size and optics simply aren't there to actually produce a better photo.


>the difference is immediately obvious. The iPhone photos are disgustingly oversharpened while somehow being weirdly smoothed, but in all the wrong spots.

That's just the iPhone's default JPEG processing. You can avoid that by shooting RAW. I posted a comparison in this comment: https://news.ycombinator.com/item?id=27769511

Not sure what is going on with your comparison crop. The phone photo seems to actually be out of focus in the relevant area, not to mention overexposed.


Technically, iPhones use HEIF, which uses HEVC, which uses DCT and DST and variable block sizes.

JPEG uses DCT with fixed block sizes. The results of the two file formats are similar.


Yes, sure. I mean that it's the result of sharpening, noise reduction, and the other processes that are typically applied to a raw image.


There is some good cheap glass, but I am not aware of any good $100 glass. I also assume that the 6 MP camera has to be quite old, and that means the iPhone 12's computational photography will be vastly superior with respect to dynamic range. In any case, "infinitely better" sounds like a big stretch. Can you share some examples?


> Can you share some examples?

The main issue, as far as I understand it, is that the tiny sensor and tiny lens at some point just end up limiting the effective resolution. You can capture more pixels, but you never really increase the actual effective resolution of the end product. At that point you're not accomplishing much more than scaling up a lower resolution picture.

The "computational photography" usually ends up just vastly over-doing the sharpening and creating artifacts, and leaving weirdly smooth gaps where there was nothing to sharpen but the camera completely failed to pick up detail.

Just a couple random photos I grabbed from a day I took my kid to the park, but not necessarily like... scientific-paper worthy examples: https://imgur.com/a/7bn8W5S

One's taken with my iPhone 12 Pro, one with a Nikon D70.

Mainly, though, I base this off of trying to print any of the photos I've taken, and maybe someone else can hop in and explain this for me. I could be way off base here.

Printing photos from my iPhone at 4x6, they all look like someone took a really soft or slightly out-of-focus photo, applied way too much sharpening, and printed it. Printing a photo from my D70 generally still looks crisp and natural. Even if I blow it up beyond its actual resolution, it looks pixelated, not soft-then-sharpened. This is even though the iPhone has twice the resolution of the D70.


Maybe we should measure resolution in TV lines per image height instead of pixels. It sounds like an archaic measurement but then all the clever processing tricks and dodgy pixel counting wouldn't hide the true resolution.


The Lumix 25mm f/1.7 is "good" $147 glass - https://www.bhphotovideo.com/c/product/1182677-REG/panasonic....

For its price it is pretty unbeatable, if you're in the MFT ecosystem.


I doubt it.


It matches my experience too, FWIW. I've never seen anything out of a phone come close to my ancient D90 with an f/1.8 35mm lens - with the caveat that this is for the things an f/1.8 is good at, portraits mostly.

The computational composition (I wouldn't call it photography) done by modern phones also just looks weird. For example, trying to detect foreground and artificially blur the background, leaving a weird in-focus-yet-distant halo of texture around things.


Why would you doubt it?


"Infinitely better" is the problem. My d7100 and z5 take amazing pics. They are also the only option for anything requiring zoom. But, for snapshot style pics the iPhone 12 Pro is great, particularly in challenging light situations.

In studio light situations, fstoppers did a video years ago that had the iPhone (6 maybe?) produce pictures that rivaled the dslr at the time.

It's simply more complicated than infinitely better.


Not the person you’re responding to, but lenses don’t improve image quality. All optics have some defects, and high quality lenses will have fewer of them. But a larger, more expensive lens primarily allows for a wider range of aperture and greater control of depth of field. That does allow for more artistic photographs, but objective image quality does not improve.

If you’re still skeptical, consider a pinhole camera with no lens capturing an image at near infinite focus. Adding a piece of glass that refracts the light can’t perform any better than pure parallel rays of light.


Ever heard of noise?

Bigger lenses mean more light. More light means better signal to noise ratio. That's an objective metric of image quality.

Phone cameras can try to compensate for this by taking long exposures - actually videos - and warp the frames into alignment to remove blur, but it's a losing game if there's a lot of motion. Or the camera can try and guess the textures and replace them with similar ones, like the Gigapixel AI and similar services do - but then it's starting to move away from capturing the actual scene.

I mean, most of the photos I take are with my phone, because it's the handiest available camera. I don't think people talking about quality from dedicated cameras are trying to say that they're better than phones as a practical tool for taking everyday photos. But you do lose some quality and computational tricks can't get it all back.


A pinhole camera does have a large depth of field, but (due to diffraction) usually produces a less sharp image than a decent focused lens.

And with a moving subject, a lens obviously makes a sharper picture, since the pinhole requires such a long exposure.


Because it is a big claim. And there are many different kinds of photographs you can take in many different situations, so what is an advantage in one situation is a disadvantage in another.

Computational photography can overcome some of the optical limitations of the small sensor. E.g. you need a really good camera to get the dynamic range that is available with iPhone HDR.

An old camera (I am not aware of any modern cameras having 6 MP sensors) is an old camera. Cheap glass is cheap glass. The bigger the sensor the better your lens has to be optically, because light is bent at the bigger angles.

The two most obvious areas of advantage for "proper" cameras with bigger sensors (ergonomics and interchangeable lenses aside) are low-light performance and "bokeh". On the other hand, if you want to have as much in focus as possible, small sensors win.


>The bigger the sensor the better your lens has to be optically, because light is bent at the bigger angles.

The angle that the light is bent depends on the angle of view, not the size of the sensor.


No. Whether you have a wide-angle lens or a telephoto lens, you still need to cover the whole sensor with the image, and the distance from the last glass element of the lens to the focal plane does not differ that much.


Don't forget the Lumia 1020 with its "41MP camera" back in 2013, ha


No "quotes" necessary, it does have a 41MP sensor.


See, I never got that. I have an A7R3 at 42MP and its photos far surpass the Lumia 1020, so what gives (other than lens/age)?

And sensor size - that's why I don't understand that metric. Yeah, the resolution is right, e.g. ~8K x 4K.

Bigger pixels are the difference, apparently.


It really is sensor size.

For two sensors of the same resolution (= number of photosensitive cells), a physically larger sensor will have a greater number of photons hitting each photosite. This means better signal-to-noise ratio (fewer photons = base electrical noise is greater in relation to the signal) and dynamic range.

So inescapably, due to basic laws of physics, for two identically implemented sensors of the same resolution, the larger one will always be better.

This doesn't mean you can't make minuscule sensors with 41MP - you certainly can. There can even be advantages to doing so. The Nokia PureView cameras were based on a novel concept: by capturing very high resolution images (41MP), you can smooth out the noise (because it is essentially random) while retaining huge amounts of detail, if you downsample them to a reasonable size (something like 12MP? I forget what it does exactly) in post-processing. It is a tradeoff - you trade worse dynamic range for better detail - but it worked really well for a smartphone at the time.

If you were to pixel peep the 41 MP files before downsampling they would look horrible, especially at higher sensitivities.
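
The statistics behind this are easy to demonstrate (a toy model with purely Gaussian noise, not a claim about the actual PureView pipeline):

    import numpy as np

    # Averaging an n x n block of noisy pixels cuts random noise by
    # roughly a factor of n, at the cost of resolution.
    rng = np.random.default_rng(0)
    scene = np.full((4096, 4096), 100.0)            # flat grey test scene
    noisy = scene + rng.normal(0, 10, scene.shape)  # sigma = 10 per-pixel noise

    binned = noisy.reshape(2048, 2, 2048, 2).mean(axis=(1, 3))  # 2x2 average

    print(noisy.std())   # ~10
    print(binned.std())  # ~5: half the noise at a quarter the pixel count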


> For two sensors of the same resolution (= number of photosensitive cells), a physically larger sensor will have a greater number of photons hitting each photosite.

Nitpick: this assumes that you're holding the f number constant. In practice smaller sensors tend to be used with smaller f numbers, which somewhat offsets the effect (especially if you are not someone who's given to shooting everything wide open on your DSLR).

A more useful way to think about it is to forget about the sensor and just consider the absolute diameter of the aperture (for a given angle of view). Your phone's aperture is a few mm in diameter. If you're shooting at the same angle of view with your DSLR, then the amount of additional light hitting its sensor (as compared to the phone) is in proportion to the additional diameter of the DSLR's aperture. So if you're shooting at, say, f16, you may not be getting any more light than the phone is at f1.8.
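
Back-of-the-envelope, with assumed focal lengths (~5.7 mm for a phone main camera, i.e. roughly a 26 mm equivalent, and an actual 26 mm full-frame lens for the same angle of view):

    # Aperture diameter = focal length / f-number.
    phone  = 5.7 / 1.8  # ~3.2 mm at f/1.8
    ff_f16 = 26 / 16    # ~1.6 mm at f/16: less light than the phone
    ff_f8  = 26 / 8     # ~3.3 mm at f/8: roughly the phone's diameter
    print(phone, ff_f16, ff_f8)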


> Nitpick: this assumes that you're holding the f number constant. In practice smaller sensors tend to be used with smaller f numbers, which somewhat offsets the effect (especially if you are not someone who's given to shooting everything wide open on your DSLR).

Well, most phone cameras seem to be around f/2, some slightly above, some a little below. The archetypal nifty fifty is f/1.8 or f/2 as well, and primes in that range are usually available for most applications and reasonable in price. Slightly slower primes at f/2.8 are often also available and cheaper. So tit-for-tat, you'd expect a full-frame camera to have at least 6 EV lower noise than your average 1/3.something-inch phone camera sensor (crop factor of ~10, area difference of ~100, log2(100) ≈ 6.6).

Your entrance pupil metric is really just a roundabout way to compensate for the crop factor of the sensor to get to the same FoV. The relevant property for exposure is the f-stop.


F-stop is the relevant property for exposure, but not for the total amount of light collected by the sensor, which is what determines the noise level (all else being equal). Exposure is light per unit area.

I do think that focusing on sensor size is unhelpful when thinking about noise levels. Big sensors do not magically collect more light simply in virtue of being bigger. They can only do so if you’re able to put a bigger hole in front of them (again, holding constant the angle of view). The use of very wide apertures is inherently more practical with smaller sensors.

As for using wide apertures with a DSLR, this is of course possible, but only in cases where a shallow depth of field is acceptable. On a cell phone camera f1.8 will almost always give sufficient depth of field. Realistically speaking most photos on a DSLR will be taken a few stops down from that.

It’s undoubtedly the case that DSLRs have an advantage over phones in terms of noise levels, but you have to consider the whole optical system to estimate the magnitude of the difference, not just the size of the sensor.


I see, thanks for that explanation. It's a race now... GAS, as they say - now there's the 100MP one, the A1.

Random side note: it's cool how we adjust our perception of HD, e.g. back then 480p was "pretty good", ha.

Good lord, just looked up one of the Nokia phones and it has 5 cameras on it, dang...


I had a Nokia 808 PureView with a 38MP camera and a real flash. You could save the full 38MP JPG, and the quality was really good for the time in daylight, but it performed badly in low light, and saving the picture was slow at 38MP.


Pixel count matters if you are coding for a device with 32MB of RAM to handle it. 41 million pixels at 24 bits/pixel is 123MB.


I wouldn't try to keep the full pixel array in main memory when it is so limited. I'd keep the image in storage and decode as needed. It's a lot of software work to implement a special decoder, and you'll always pay a significant performance penalty, but it's doable.
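
A rough sketch of what that can look like (hypothetical raw dump; the 808's active array is roughly 7728 x 5368, and np.memmap only pages in the strips you touch):

    import numpy as np

    W, H = 7728, 5368   # ~41 MP; at 16 bits per sample, ~83 MB on disk
    raw = np.memmap("frame.raw", dtype=np.uint16, mode="r", shape=(H, W))

    STRIP = 256         # rows per strip: ~4 MB each
    for y in range(0, H, STRIP):
        strip = np.asarray(raw[y:y + STRIP])  # only this strip is paged in
        # ... demosaic / scale / encode the strip, then append to output ...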


Haha, I'm already struggling with non-frame-by-frame scanning of an 8MP sensor on a single-core Pi. Some cool tricks though (like non-zero bit)


I'd suggest playing with such a camera before commenting. Maybe it doesn't matter for day-to-day point-and-shoot, but you can capture insane pics with a 100MP sensor.


Why don't manufacturers advertise that more clearly? I've never seen that information on the simple specs that most models list out.


For a long time it was very easy to advertise cameras by the number of megapixels, and consumers supposedly like easy-to-understand numbers where bigger is better. This used to be kind of true back in the day of <6MP, but raw megapixel numbers have been fairly useless for fifteen years or so.

It's a slow process moving away from this once you've drilled it into people's heads with marketing. It's just like how MHz used to be an easy and fairly reliable way to compare CPUs in the '90s, and it took a really long time for the general public to understand that clock speed isn't all that matters.


I’ll take that a step further and contend Apple wants to do away with advertising technical specs: resolution, speed, capacity, etc should be good enough that users just don’t care to know. At that point, most customers shy away from competition because the tech numbers are incomprehensible or deceptive.


I'm looking forward to further movement of the market in that direction. Some products aren't the centerpiece of your life and you want it to just work without consuming your mental effort. Enthusiasts can still read the spec sheets or reverse-engineered estimated spec sheets if it comes to that, but normal people can get on with their life.

I don't know the storage capacity or resolution of my phone, nor the engine size of my car because they're good enough.


It's not easy to represent your fancy HDR algorithms and focus finding in a spec sheet, especially in the realm of phones, where there are fewer standard terms to play with (even something as simple as OIS means nothing to 99% of buyers).

That's why you see a focus on showing off the end results now, like Apple's rebooted "Shot on iPhone" campaign or Samsung's current photo competition restricted to photos taken with a Galaxy S21.


Well, not for video, where you need e.g. 12MP minimum to do 4K.


4K is 3840*2160 = roughly 8.3 megapixels.


That's at a 16:9 video ratio though. There are no phone camera sensors out there with a 16:9 ratio. That would be useless for taking still photographs.

Most DSLR/mirrorless cameras have a 3:2 ratio, whereas smartphone cameras tend to have a 4:3 ratio.

If you want a smartphone camera sensor 3840 pixels across that can shoot 4K, a 4:3 sensor would need to be 2880 pixels tall. So 3840 x 2880 = 11.1MP would be the bare minimum for a smartphone camera that can shoot 4K.

In practice you'd want something larger than that, so you can do DCI 4K (which is a bit higher resolution) and do things like gyroscopically assisted electronic image stabilization (which benefits from a few extra pixels around the border). The iPhone can probably implement video stabilization that rivals GoPro's HyperSmooth stabilization.
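
A quick sanity check of that arithmetic:

    # Minimum 4:3 sensor needed to cover a video frame of a given width,
    # assuming no crop in the horizontal direction.
    def min_4_3_sensor_mp(video_width_px):
        height = video_width_px * 3 // 4
        return video_width_px * height / 1e6

    print(min_4_3_sensor_mp(3840))  # ~11.1 MP for UHD 4K
    print(min_4_3_sensor_mp(4096))  # ~12.6 MP for DCI 4K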


Hmm, I had DCI in mind, which I consider quite a bit larger than UHD 4K, but yes, it's still ~9MP.


> YouTube, since 2010,[51] and Vimeo allow a maximum upload resolution of 4096 × 3072 pixels (12.6 megapixels, aspect ratio 4:3)

I guess it really depends on the aspect ratio you're going for.

https://en.wikipedia.org/wiki/4K_resolution#Video_streaming


I've never personally seen that resolution used, nor called "4K". If you're going to lump together a resolution with 50% more pixels under the same umbrella term then IMO "4K" is meaningless and we need more granular terminology.


Yeah, and now if something doesn't have enough pixels to do 4K, there's:

a) nothing stopping them from faking it; plenty of crappy hardware will record files at an upscaled resolution (see webcams especially for this)

b) you probably don't want a larger version of the output of that device anyway.

I mean even a 130 dollar phone has a 13 megapixel camera. https://www.amazon.com/BLU-Android-Factory-Unlocked-Display/...


I wonder if anyone can reverse blur images in the PDF.


So this is basically just an advertisement?


Not quite, because some useful info can be viewed. Unfortunately the full PDF is only available after payment. By the way, I'm absolutely not involved in any way with the site/producer of the report!


Pretty much, but that isn’t an instant disqualifier. Lots of “Show HN” links are basically ads.

If people think it’s interesting (ad, or not), it will be promoted. If not, it won’t live long.



