How Does a VR Lens Work?

VR headset makers have largely settled on Fresnel lenses for their thinness and light weight. However, the concentric sections of a Fresnel lens are still discrete units separated by gaps, no matter how small, and light scattering off those ridges produces the "god rays" effect. Under certain conditions, god rays can even worsen into full-blown lens flare. The god rays effect is easily one of the most hated artifacts in VR entertainment, to the point where people have been modding their Oculus Rift CV1 headsets with spare parts from the Samsung Gear VR to eliminate it. The reduction of god rays has also become one of the chief areas of focus in the ongoing product research and innovation efforts of major VR companies like Samsung and Oculus.

One proposed solution is to use fewer, larger Fresnel sections. Although this may reduce the number of ridges where light can scatter, having bigger sections of thin lenses can significantly reduce the sharpness of the VR image. Considering how much of the cost of these VR headsets goes into high-resolution displays, ruining the image quality because of the lenses seems like a step backward.

More recently, Facebook filed a patent for a hybrid Fresnel lens, presumably for their Oculus headsets. The hybrid design has a standard, non-Fresnel lens near the center to reduce the god rays effect. The Fresnel lenses are then used along the periphery where the god rays effect will not be as egregious. This design may not be as small and light as a full Fresnel lens, but it may address one of the biggest optical artifacts that have persisted through this generation of VR headsets.

VR lenses have been around since the days when a VR headset consisted of little more than a cardboard frame and a pair of straps. They act as the interface between our eyes and the high-resolution display that sits a mere few inches away. Simple as they may seem, lenses have become a focal point of the innovation efforts of VR technology companies.

Optical aberrations continue to be a problem even in the current generation of VR headsets.

Lenses matter just as much in photography, where "VR" means something entirely different: Nikon's Vibration Reduction system. Nikon rates VR performance in stops; the rated value is achieved when a DX-format compatible lens is attached to a DX-format D-SLR camera, or an FX-format compatible lens to an FX-format D-SLR camera, with zoom lenses set at the maximum telephoto position.

Nikon VR originates in the lens, not in the image sensor, which means that stabilization algorithms optimized for each individual lens can be applied. Another advantage of lens-based VR is that the stabilized image is visible in the viewfinder when you press the shutter-release button halfway, giving you the freedom to compose your image more easily.

Nikon VR lenses use two angular velocity sensors: one detects vertical movement (pitch), the other horizontal movement (yaw), with diagonal motion handled by both sensors working together.

The sensors send angular velocity data to a microcomputer in the lens, which determines how much compensation is needed to offset the camera's shake and sends that information to a duo of voice coil motors that move selected lens elements to compensate for the detected motion. What does this mean in practical terms? It provides you with up to four stops of "hand-holdability," delivering dramatically sharper images in a wide range of conditions.
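To make that loop concrete, here is a minimal sketch in Python. Everything in it (the sampling rate, the gain, the function names) is an illustrative assumption, not Nikon's actual firmware: the gyros report angular velocity, the microcomputer integrates it into a drift estimate, and the voice coil motors are commanded to move the VR group the opposite way.

```python
# Minimal sketch of a lens-based stabilization loop (illustrative only;
# the names, rates, and gain below are assumptions, not Nikon firmware).

SAMPLE_HZ = 1000              # assumed sensor sampling rate
DT = 1.0 / SAMPLE_HZ          # time between samples, in seconds
GAIN = 1.0                    # lens-specific optical gain (assumed)

def read_gyros() -> tuple[float, float]:
    """Stand-in for the two angular-velocity sensors (rad/s)."""
    return 0.001, -0.002      # (pitch_rate, yaw_rate): dummy shake values

def drive_voice_coils(x_mm: float, y_mm: float) -> None:
    """Stand-in for the voice coil motors that shift the VR element group."""
    print(f"VR group offset: x={x_mm:+.6f} mm, y={y_mm:+.6f} mm")

def stabilize(n_samples: int) -> None:
    x = y = 0.0                      # current VR element offset
    for _ in range(n_samples):
        pitch_rate, yaw_rate = read_gyros()
        # Integrate angular velocity over one sample period to estimate
        # image drift, then command the opposite element movement.
        x -= yaw_rate * DT * GAIN    # yaw (horizontal) -> x correction
        y -= pitch_rate * DT * GAIN  # pitch (vertical) -> y correction
        drive_voice_coils(x, y)

stabilize(3)   # run three correction cycles
```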

Not all anti-shake technologies are the same. The in-camera anti-shake technology used by some manufacturers relies on a process that actually shifts the image sensor, and its performance benefit is generally agreed to be limited to about one-and-one-half to two stops.
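To see what those stop counts mean in practice, here's a small worked example. It assumes the common 1/focal-length handholding rule of thumb, which is an illustration on my part, not something stated above; each stop of stabilization doubles the usable exposure time.

```python
# Worked example: stops of stabilization vs. usable handheld shutter speed.
# Assumes the classic 1/focal-length rule of thumb (an assumption here).

def min_handheld_shutter(focal_length_mm: float, vr_stops: float) -> float:
    """Slowest usable shutter speed in seconds under the 1/f rule,
    where each stop of stabilization doubles the usable time."""
    base = 1.0 / focal_length_mm       # e.g., 1/300 s on a 300mm lens
    return base * (2 ** vr_stops)

for stops in (0, 1.5, 2, 4):
    t = min_handheld_shutter(300, stops)
    print(f"{stops} stops: roughly 1/{1 / t:.0f} s")
# 0 stops: ~1/300 s; 1.5: ~1/106 s; 2: ~1/75 s; 4: ~1/19 s
```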

For Nikon photographers, an additional two stops of VR performance capability can easily be the difference between a blurry picture and a beautiful sharp one. But the benefits of Nikon VR aren't limited to shutter speeds.

Consider shooting on an overcast day at a medium ISO where greater depth-of-field might be desirable; that's a situation where VR genuinely earns its keep. My rule, though, is simple: leave VR off unless you know you need it. Yes, this rule flies in the face of what almost everyone in the world seems to do and what Nikon implies with their advertising and marketing.

The simple fact is that VR is a solution to a problem, and if you don't have that problem, using VR can become a problem of its own. To understand that, you have to understand how VR works. In the Nikon system, VR is essentially an element group in the lens that is moved to compensate for any detected camera motion.

Because this element group is usually deep in the middle of the lens (typically near the aperture opening or entry point of the lens, though often not exactly so), you have to think about what is happening to the optical path when VR is active. Are there times when it shifts in a way that changes the image quality beyond pure stabilization? I believe there are, though the impact is visually quite subtle. The mid-range-distance bokeh of certain VR lenses appears to be impacted by VR being on (one older telephoto zoom was notorious for this).

Put another way, the background in the scene moves slightly differently than the focus point in the optical path, due to the position of the VR elements. This results in what I call "busy bokeh": bokeh that doesn't have the simple shape and regularity we expect out of the highest quality glass. Most people using VR don't question the mechanics of the system; they simply believe it's some special form of magic. It's not. Physics is involved, not magic. And one of the physics issues is the sampling and movement frequencies.

The sampling frequency of the motion detection mechanism determines what kind of movement, and how much of it, can be removed. Care to guess what the sampling frequency might be? It's on the order of a thousand samples a second. That sounds pretty good, doesn't it? But while that frequency describes how often camera motion is measured, it is not completely uncorrelated with shutter speed. Simply put, a lot has to go right at very short shutter speeds for there not to be a small visual impact, especially with long lenses.

That, plus the friction and inertia in the VR element gimbal itself, is going to be relevant as well. In other words, you might be able to detect a very small, high-frequency movement, but you may not be able to perfectly correct for it.
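A quick back-of-the-envelope calculation shows why. Assuming a sampling rate on the order of 1000 Hz (again, an assumption for illustration, not a published spec), count how many motion samples the system gets while the shutter is actually open:

```python
# How many motion samples fit inside one exposure? (SAMPLE_HZ is assumed.)
SAMPLE_HZ = 1000

for shutter_s in (1/30, 1/250, 1/1000, 1/4000):
    samples = SAMPLE_HZ * shutter_s
    print(f"1/{1 / shutter_s:.0f} s exposure: {samples:.2f} samples")
# 1/30: 33.33 samples; 1/250: 4.00; 1/1000: 1.00; 1/4000: 0.25
```

At 1/4000 the exposure can end before a single new sample arrives, which is exactly the "correction out of sync with the shutter" problem described below.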

But that's not all: when you have VR turned on, your composition isn't necessarily going to be exactly what you framed, because the system re-centers the VR elements when the shot is taken. This means that you can get slightly different framing than you saw.

Note: this seems to be changed in Sport VR mode, which is not the default setting (thus my wording above). Indeed, if you go down to the sidelines of a football game and check how all those photographers' lenses are set, you can tell the ones that are really pros: VR is usually off unless they're on a portion of the stadium that is vibrating from fan action. Those pros have all encountered the same thing you will some day: if you have a fast enough shutter speed, sometimes the system is running a correction that's not fully in sync with the shutter.

The results look a bit like the lens being run with the wrong AF Fine-Tune value, or like diffraction being recorded: absolute edge acuity is hurt a bit. The interesting thing is that pros demanded VR (IS, in the case of Canon) in the long lenses, and then it turned out that they very rarely use it! A word of advice: some of those previous-generation non-VR exotics are relative bargains.

Consider it the VR bubble. Some day people will stop paying such silly premiums for VR over non-VR. At least they should; the current difference is too much of a premium for the benefit, I think. Especially considering many of those photographers were using monopods or tripods with gimbal heads!

Anecdotal evidence continues to pile up about VR and high shutter speeds, and it matches my own experience. In almost all of the cases where someone reports that VR helps them at high shutter speeds, I've been able to find that it's not VR itself that's removing camera motion; rather, their handholding or tripod technique is such that they're not getting consistent autofocus without VR, but they are with it.

However, as with virtually everything in photography, there's a caveat to the above. One of the things that Nikon just doesn't explain well enough is the concept of "moving camera" versus "camera on moving platform."

However, if there's a platform underneath you causing vibrations (car, boat, train, plane, helicopter, etc.), that changes things: Active VR should be used when you're on one of those moving platforms. Normal VR should be used when you're on solid ground and it's just you that's shaking.

Basically, if you're vibrating due to an outside source, Active VR should be on; if you're the only source of camera movement, use Normal. The difference between the Active and Normal VR modes has to do with the types of movements that are expected and which ones the VR will attempt to correct. Handholding motion tends to be slower and to move in predictable paths, while platform vibration tends to be faster and more random.
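Restated as a toy decision helper (purely illustrative; there is no such camera API):

```python
# Toy restatement of the Active/Normal rule above; illustrative only.
def choose_vr_mode(on_moving_platform: bool, handheld: bool) -> str:
    if on_moving_platform:
        return "Active"   # fast, random platform vibration expected
    if handheld:
        return "Normal"   # slower, more predictable handholding motion
    return "Off"          # locked-down support: little shake to correct

print(choose_vr_mode(on_moving_platform=True, handheld=True))    # Active
print(choose_vr_mode(on_moving_platform=False, handheld=True))   # Normal
print(choose_vr_mode(on_moving_platform=False, handheld=False))  # Off
```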

Knowing which type of motion the VR needs to deal with lets the system optimize its response. The motion that you impart by your handholding may not get corrected properly by the VR system because your shutter speed is faster than the frequency with which corrections are well managed. But the platform you're sitting on is imparting small, frequent, and random motions that might actually be corrected (though probably not fully) by having VR on.

The question here is whether the improvement from removing some of the platform movement outweighs the possible degradation from the shutter operating faster than the VR can keep up with.

There's no clear answer to that, as every situation is going to be a little different, but my tendency is to experiment with Active VR being On versus VR being totally off when shooting from moving platforms at high shutter speeds. I closely examine my initial results, and make my final decision based upon that. Of course, that in and of itself can be a problem for some, as examining a small screen in a moving vehicle isn't exactly easy and precise.

Still, I sometimes see an improvement with VR as opposed to without it when I'm shooting at high shutter speeds from a vehicle. At the same time, that's not as much improvement as you'd see using a dedicated gyroscope instead of VR. If you regularly shoot out of helicopters, a gyro is a better investment than a more expensive VR lens.

Some explain that shutter speeds above a certain point (the flash sync speed) are achieved by moving a slit across the image rather than exposing the full frame simultaneously (this is a simplification, but it's good enough for this discussion).

Nikon claims that it can now distinguish between camera movement and platform movement based upon the information the gyros are providing the system. In such cases, turn off Sport VR and use Normal if you're on an active platform. At the other end of the movement spectrum, we have subject motion: if the subject is moving, using VR with longer shutter speeds can be problematic.

This is a tough thing to learn, and it's usually learned the hard way: the only motion being removed by the system is camera motion. This is, of course, a generalization. There's a more detailed table below the one I just referenced that shows how distance impacts the shutter speed, too. Plus, the size of the subject in the overall frame makes a difference.

Expecting VR to remove all motion, including subject motion, is something everyone has to get over. Another type of motion comes with panning the camera, and VR has impacts there, too. I've seen people say that they think you should turn VR off when you pan with a subject. With Nikon's system, that's not really necessary.

That's because the Nikon VR system is very good at detecting a constant camera movement. If you're doing a smooth pan in one direction, the VR system will focus on removing only the motion on the opposite axis.
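Here's a minimal sketch of how pan detection along one axis might work. This is an assumption about the general technique, not Nikon's actual algorithm: sustained angular velocity in one direction is treated as a deliberate pan, so correction on that axis is suspended while the other axis keeps being stabilized.

```python
from collections import deque

# One-axis pan detector (illustrative assumption, not Nikon's algorithm):
# a large, sustained average angular velocity looks like a deliberate
# pan, so corrections on that axis are suspended.

PAN_RATE = 0.05    # rad/s; assumed threshold for "this is a pan"
WINDOW = 100       # samples in the running average (assumed)

class AxisStabilizer:
    def __init__(self) -> None:
        self.history = deque(maxlen=WINDOW)

    def correction_for(self, rate: float) -> float:
        """Return the angular rate to cancel, or 0.0 during a pan."""
        self.history.append(rate)
        mean = sum(self.history) / len(self.history)
        if len(self.history) == WINDOW and abs(mean) > PAN_RATE:
            return 0.0     # constant one-direction motion: a pan
        return -rate       # ordinary shake: cancel it

yaw = AxisStabilizer()     # horizontal axis: follows a horizontal pan
pitch = AxisStabilizer()   # vertical axis: still fully corrected
```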

That's the way it's designed to operate. The trick is to make sure that your pan is relatively smooth, and not jerky. Most people start to jerk when they press the shutter release during pans.

You need to practice NOT doing that, and to continue the pan while the shutter is open, not stopping. Indeed, try practicing this at your local track or another place with some runners present. Pan with a runner and take a picture. When the mirror returns and the viewfinder view is restored after the shot, is the runner still in the same spot in the frame? If not, you didn't continue panning through the shot. Tsk tsk.

Try again. Practice until you can take a series of shots and the runner stays in the same spot through the entire sequence, both in the shots and while you're panning between shots.

You shouldn't have to catch up to the runner. Aside: back in high school, my photography mentor at the time broke me of the habit of stopping during pans in a brutally sadistic way: he sent me to track meets with a TLR (twin lens reflex). You look down into the viewfinder of a TLR. But here's the thing: left and right are reversed.

So if the subject is moving right to left in front of you, they appear left to right in the viewfinder. You don't have a chance of following motion with a TLR unless you can relax your brain and make your camera motion just mimic the motion of your subject.

You can't look and react, look and react. Yet another aspect of VR that confuses people is activation. A partial press of the shutter release always engages VR and allows it to begin a sequence of corrections.
