Fashion Leaders on the Fight for Diversity
While certain brands are being heralded for their use of models of varied races, sizes, and gender identities this season, the Fashion Spot has released its annual Diversity Report, and the numbers are clear: the rest of the fashion industry needs to do better.
This year saw only a 0.4 percent increase in racial diversity from last season. Plus-size representation has remained relatively stagnant. Only two shows were exclusively plus-size, Torrid and Addition Elle, and as per usual, Chromat and…
Whether you’re aware of the correct terminology or not, you have likely experienced color contamination happening in your photographs already. Put simply, color contamination is when one color is affected by the presence of another color in close proximity.
For example, if you’re photographing two friends side by side, one wearing a white t-shirt and the other a red one, the white t-shirt will likely take on a pinkish tone because it’s receiving bounced light from the red t-shirt nearby.
This color contamination effect isn’t specific to photography; it happens around us all day, every day, and we’re so accustomed to it that most of us never even notice it. So why bring it up? Because it’s a frustrating effect when it appears in our shots, especially if we aren’t aware of what’s causing it.
We may even write it off as a white balance issue or other color balance problem, as it’s usually so subtle we might not even try to correct it. But when color contamination is at its most intense, we have to take note and address it.
Think about doing a portrait shoot in the woods. You’re surrounded by green: the leaves in the trees, some bushes, and maybe even green grass on the ground around you. The daylight comes through the trees and bounces around on all that foliage before it hits your subject, resulting in some very sick-looking green subjects. Not a great look. Think about how many woodland portraits you’ve seen that have been converted to black and white. Starts to make more sense now, right?
Can’t I Just White Balance My Shots?
White balance operates along the Kelvin scale, which deals with balancing only a certain range of colors, so no matter how hard you try, many of these color-contaminated shots simply can’t be fixed with white balance alone; hence the black-and-white solution.
But more than that, color contamination is often a localized effect. Let’s go back to that white t-shirt that now looks a little pink because it was next to a red one. We can’t color balance the scene to correct the shirt without affecting the whole image. It’s these factors that make color contamination such a troublesome and incredibly overlooked problem.
What is This ‘Radiosity’ Thing?
Strangely, “radiosity” is the term I was taught 20 years ago in the film days, but you hardly hear the word used in association with photography anymore. Today the word is more associated with how light and color act upon one another in computer-generated worlds. In fact, one of the greatest leaps forward in 3D modeling was accurately modeling how light affects one surface when in proximity to another.
Without getting too nerdy: ironically, 3D modelers love radiosity, as it gives their worlds and textures added depth and realism. We photographers, specifically portrait photographers, hate it, and we try to color balance it away where we can. If you’re interested, you can take a look at the Radiosity in Computer Graphics page on Wikipedia, but be warned: there’s a whole lot of maths involved.
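If you’re curious how the graphics folks model it, the core of the idea fits in a few lines. This is purely an illustrative sketch of a single diffuse bounce with made-up RGB values, not anything from a real renderer: light reflected off a colored surface is the incident light scaled channel by channel by the surface’s color, and a fraction of that tinted light then lands on the neighboring subject.

```python
def bounce(incident, albedo):
    """One diffuse bounce: reflected light is the incident light scaled
    per channel by the surface's RGB albedo (its reflectivity)."""
    return tuple(i * a for i, a in zip(incident, albedo))

def light_on_subject(direct, bounced, strength=0.2):
    """Light reaching the subject: direct light plus a fraction of the
    light bounced off the nearby colored surface."""
    return tuple(min(1.0, d + strength * b) for d, b in zip(direct, bounced))

white_light = (1.0, 1.0, 1.0)
red_shirt = (0.9, 0.1, 0.1)  # a strongly red surface reflects mostly red

tinted = bounce(white_light, red_shirt)
result = light_on_subject((0.8, 0.8, 0.8), tinted)
print(result)  # the red channel ends up higher than green/blue: a pink cast
```

This is exactly why the white t-shirt reads pink: the shirt itself reflects neutrally, but the extra light arriving at it does not.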
Regardless of what you want to call it, this color contamination effect is a very real problem for us photographers if we want to depict objects like people, cars, clothing and so on in the best possible way.
No company wants you to photograph their white car only for it to look a ‘little pink’ on one side, and the same goes for fashion as well. We need to be aware of what colors we’re putting next to one another.
Color Contamination in Action
In the images below, I set up a mini set to illustrate the color contamination effect in action. I purchased three spheres: a cue ball for its very shiny surface, a table tennis ball for its very matte surface, and a golf ball for its very textured surface. I placed them all on a white surface, shone a single light at them with a variety of colored papers next to them, and took shots to document the whole thing.
Look closely at the shots below to see just how the different surfaces and textures are affected by the close proximity of color.
Taking a Closer Look
On first impression you may not think it’s a big deal, because our eyes are so accustomed to normalizing color variance when it’s in proximity to similar tones, but as the images change you should be able to see just how dramatic the effect is.
To further cement my point, I’ve isolated the individual spheres in the images below and placed them next to the spheres shot against plain white. In isolation like this, the effect is a lot more visible and significant, to say the least.
How Can I Use This Knowledge?
You may look at the images above and think that it’s just a byproduct of taking photos, that there’s no use worrying about something that can’t be helped. Although there are times when this can’t be avoided, color contamination is very real and it is something we can limit a lot if we’re careful.
For example, think twice about photographing the bride right next to a huge bunch of flowers; all that green will bounce back onto her face. Consider bringing her slightly forward to avoid it, or look at alternatives.
Think about the effect of photographing a model next to a brightly colored car or building. You don’t need to avoid the shot but there are things you can do to limit the effect, like always having the face pointed away from the brightly colored object.
As I documented in the images above, if you can’t avoid the color contamination, try to include the offending color in the actual shot. The effect is dramatically reduced visually if the eye can see where that color is coming from, compared to when you crop it out.
Can I Use This Knowledge to My Advantage?
The good news is that you can use this color contamination effect to your advantage if you’re clever. Remember that this radiosity isn’t exclusive to color; you can use blacks and greys to add dimension to your subjects and objects. You’ll often see studio photographers using black polyboards (large polystyrene boards) on either side of the model; this not only controls the light but also adds a lot of shape through shadow in the process.
I always carry black velvet sheets with me on location to limit the light bouncing around a subject, but I also keep sheets of grey card in the studio, which are less severe than black, to add a little definition to the features where necessary.
In the sphere comparison photos above, look at the light grey and dark grey images compared to the black and white images. See how they shape the spheres differently through shadow? Use this to your advantage, either in the studio or on location.
Consider taking a white sheet with you on location too. Along with my black velvet, I always have a white sheet with me that I can throw up either to bounce in some light or to limit the color contamination from nearby colored surfaces.
Fire Your Assistant if They Look Trendy!
Many years ago I was photographing fashion in natural light at the beach. A pretty easy job, but when I got the images back and started working on them, I saw a very ugly, insipid-looking greenish tinge on some of the clothing and skin. It was only apparent in some of the shots, and it was always localized to certain areas.
It took me a very long time to work out what this was until I remembered that my assistant that day had been wearing a bright yellow-green t-shirt. In some of the shots he was very close to the model, holding a reflector just out of frame, and not only was he bouncing in light from the reflector, he was also bouncing in light from his hideously ugly t-shirt.
People joke about my grey sweatshirt, but trust me, if you’ve ever tried to color balance greenish tinges out of skin, you’ll switch to looking boring as hell like me in a heartbeat. When I was assisting all those years ago back in London in the film days, black shirts were mandatory on set, no ifs or buts. Now sets are a kaleidoscope of color balancing nightmares. Take a look at the BTS of the film industry: how many lighting technicians do you see wearing day-glo? Not many.
I know I sound like a grumpy old man, and although it’s a very real problem, it really only affects certain situations, like still life work with shiny surfaces or macro beauty work. Still life shooters who photograph metal or other shiny surfaces nearly always wear all black to avoid this. Either way, it’s very wise to be aware of it and to advise assistants on set to dress appropriately where necessary.
I think this color contamination effect is an incredibly overlooked aspect of modern photography due to the “I’ll fix it later in post” mindset. Not only is it very time consuming to fix it in post but it’s also practically impossible in certain situations due to the colors being outside of the white balance spectrum.
If you’re aware of the colors around you when you’re shooting then you can limit the effect or use it to your advantage where necessary.
Points to Remember
Think about the color of surfaces around your subject.
Consider whether another area nearby, like a white wall, would work better.
Look at how multiple subject colors interact with one another when in close proximity.
Bring a black sheet and a white sheet of fabric with you on location to throw over brightly colored objects if you need to.
Consider getting some dark and light grey card for the studio and using it as a bounce board instead of white. This will give more shape to your subject than just a white bounce board.
Think about what the people on set are wearing. If assistants are going to be close to the final shot, get them to change any brightly colored outfits.
Think about what YOU are wearing. If you’re a macro beauty shooter who will be inches away from your subject, you definitely don’t want to be wearing bright colors as it will most certainly have an effect on the shot.
About the author: Jake Hicks is an editorial and fashion photographer based in Reading, UK. He specializes in keeping the skill in the camera and not just on the screen. If you’d like to learn more about his incredibly popular gelled lighting and post-pro techniques, visit this link for more info. You can find more of his work and writing on his website, Facebook, 500px, Instagram, Twitter, and Flickr. This article was also published here.
Nikon D500 Says Battery Empty with 25% Juice Left: Report
If you’re a Nikon D500 owner who feels like batteries run out of juice faster in that camera than in other Nikon DSLRs, it may not be just you. A new report by Beran suggests that the D500 measures remaining capacity differently and doesn’t use up all available charge.
“I found one surprising issue related to how Nikon D500 measures remaining battery capacity,” Beran writes. “D500 displays lower remains capacity than what other Nikon bodies do.”
Through his experiments, he found that an official EN-EL15 or EN-EL15a battery that shows as completely empty in a D500 is still able to shoot 100 to 200 more photos in other Nikon DSLRs. Here’s a short video he made to demonstrate this “Batterygate” issue:
And although the D500 correctly shows fully charged batteries as 100% in the battery indicator, partially used batteries always show less charge remaining than they do on other compatible Nikon bodies (Beran tested this with the D7200, D750, D810, and D850).
“The less capacity is remaining, the difference between D500 and others is bigger,” Beran writes. “If a battery has about 20-30% of remaining capacity in other bodies, the D500 qualify such battery as empty and does not turn on at all.”
Beran created this chart showing how the remaining capacities of six different batteries were measured in 5 different Nikon DSLR bodies:
Beran also tested this on multiple D500 units of different ages just to make sure it wasn’t a flaw with a single camera or a single batch from the factory.
After contacting Nikon Service, Beran learned that Nikon isn’t aware of any differences in battery metering between the Nikon D500 and other Nikon cameras.
“My conclusion is that all D500 cameras by design measure remaining capacity of battery lower than other Nikon bodies,” Beran writes. “What makes me confused is the fact that Nikon D500 is not able to fully utilize the capacity of the battery. The number of shots per battery can be about 25% higher if D500 would be able to utilize the battery the same way as other bodies.”
How I Created a 16-Gigapixel Photo of Quito, Ecuador
A few years ago, I flew out to Ecuador to create a high-resolution image of the capital city of Quito. The final image turned out to be 16 gigapixels in size, and at a printed size of over 25 meters (~82 feet) it allows people to see jaw-dropping detail even when viewed from a few inches away.
I’ve thought gigapixel technology was amazing ever since I first saw it around 8 or 9 years ago. It combines everything that I like about photography: the adventure of trying to capture a complex image in challenging conditions, as well as using high-tech equipment, powerful computers, and advanced image processing software to create the final image.
I’ve been doing this for a while now, so I thought that I would share some of my experiences with you all so that you can make your own incredible gigapixel image as well.
The picture was made with the 50-megapixel Canon 5DS R and a 100-400mm lens shot at 400mm. It consists of 912 photos, each with a RAW file size of over 60MB, captured with a robotic camera mount and then combined into a single uniform high-resolution picture with digital stitching software.
With a resolution of 300,000×55,313 pixels, the picture is the highest-resolution photo of Quito ever taken. The tiled online viewer allows you to instantly view and explore an image that is several gigabytes in size.
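As a quick sanity check on those figures (my arithmetic, using only the numbers quoted in this article):

```python
# Pixel count from the stated resolution
width, height = 300_000, 55_313
gigapixels = width * height / 1e9
print(f"{gigapixels:.1f} gigapixels")  # 16.6

# Rough size of the source data: 912 frames at ~60 MB each
raw_gb = 912 * 60 / 1024
print(f"~{raw_gb:.0f} GB of RAW files")  # ~53
```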
The first step in taking the photo is site selection. I went around Quito and viewed several different sites. Some of the sites I felt were too low to the ground and didn’t give the wide panorama that I was looking for. Others were difficult to access, or were high up but still unable to give that wide panoramic view.
I finally settled on taking the image from near the top of the Pichincha volcano. Pichincha is classified as a stratovolcano, and its peak is over 15,000 feet high. I was able to access the spot via a cable car, and it gave a huge panoramic vista of the entire city as well as all the volcanoes that surround Quito.
The only drawback I saw with the site was that I felt it was a little too far from the city, and I didn’t think people would be able to see much detail when they zoomed in. To address this, I decided to choose a spot a bit further down from the visitor center. That meant we would have to carry all the equipment there (which isn’t easy at high altitude), but I felt it would give the best combination of a great panoramic view while being close enough to the city for detail to be captured.
The site was surrounded by very tall grass as well as a little bit of a hill that could block the complete view so I decided to set up three levels of scaffolding and shoot from the top of that. There wasn’t any power at the site since it was on the side of a volcano so we had to bring a small generator with us.
I ran extension cords from the generator up to the top of the scaffolding, where they powered the panorama head as well as my computer. I didn’t plug in the camera because I would be able to easily change the batteries if they ran out.
Anything that affects the light rays on their path to the camera’s sensor will affect the ultimate sharpness of the image. Something that is rarely mentioned is the effects of the atmosphere on high-resolution photos. Two factors are used to define atmospheric conditions: seeing and visibility.
Seeing is the term astronomers use to describe the sky’s atmospheric conditions. The atmosphere is in continual motion due to changing temperatures, air currents, weather fronts, and dust particles. These factors are what cause stars to twinkle. If the stars are twinkling considerably, we have “poor” seeing conditions; when they are steady, we have “good” seeing conditions.
Have you ever seen a quarter lying on the bottom of a swimming pool? The movement of the water makes it look like the quarter is moving around and maybe a little bit blurry. Just as the movement of water moves an image, atmospheric currents can blur a terrestrial image. These effects can be seen in terrestrial photography as the mirage effect, which is caused by heat currents and also as a wavy image due to windy conditions. It’s interesting to note that seeing can be categorized according to the Antoniadi scale.
The scale is a five-point system, with 1 being the best seeing conditions and 5 being the worst. The actual definitions are as follows:
1. Perfect seeing, without a quiver.
2. Slight quivering of the image with moments of calm lasting several seconds.
3. Moderate seeing with larger air tremors that blur the image.
4. Poor seeing, constant troublesome undulations of the image.
5. Very bad seeing, hardly stable enough to allow a rough sketch to be made.
(Note that the scale is usually indicated by use of a Roman numeral or an ordinary number.)
Visibility: The second factor that goes into atmospheric conditions is visibility, also called visible range: a measure of the distance at which an object or light can be clearly discerned. Mist, fog, haze, smoke, dust, and even volcanic ash can all affect visibility.
The clear high-altitude air of Quito made for some amazing visibility the day of the shoot. The only things that affected it that day were a few small grass fires in the city. The Cotopaxi volcano was also giving off smoke and ash, but it didn’t seem to be a problem since it was blowing away from the city. There also weren’t any clouds in the sky, so the exposure wouldn’t be affected by clouds blocking out the sun.
Camera: I decided to use a 50 megapixel Canon 5DS R. The 5DS R is an amazing camera that is designed without a low-pass filter which enables it to get amazing pixel-level detail and image sharpness.
Lens: A Canon 100-400mm f/5.6 II lens was used to capture the image. Several factors went into the decision to use this lens, such as size, weight, and focal length. The 100-400mm is small and light and would allow the robotic pano head to function with no problems. It also reaches a focal length of 400mm, which would allow some nice detail to be captured.
I thought about using a 400mm DO and a 400mm f/2.8, but each had its own drawbacks. The 400mm DO doesn’t zoom, and I wanted to be able to change focal length if I ran into problems, while the 400mm f/2.8 was too big and heavy to be used properly in the pano head. I have a Canon 800mm f/5.6 that I would have loved to use, but it was also too heavy for the robotic pano head (humble brag).
Another interesting factor in my decision to use the 100-400mm f/5.6 is that the diameter of the front lens element was small enough that atmospheric distortion wouldn’t be too much of a problem. I have spent a lot of time experimenting with astrophotography, and the larger the front element, the more atmospheric distortion, or “mirage effect,” gets picked up, resulting in a blurring of the photo.
Pano Head: I used a GigaPan Epic Pro for the image capture. The GigaPan is an amazing piece of equipment that automates the image capture process. The GigaPan is based on the same technology employed by the Mars rovers Spirit and Opportunity, and is actually a spin-off of a research collaboration between a team of researchers at NASA and Carnegie Mellon University.
To use a GigaPan you first need to set it up for the focal length of the lens that you are using. You then tell it where the upper-left-hand corner of the image is located and where the bottom-right-hand corner of the image is. It then divides the image into a series of frames and automatically begins scanning across the scene triggering the camera at regular intervals until the scene is completely captured.
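The frame count the GigaPan ends up with follows from simple geometry: each frame covers the lens’s angular field of view, and frames must overlap for stitching. Here’s a rough sketch of that math, assuming a full-frame sensor, a 400mm lens, 30% overlap, and a made-up panorama extent (none of these numbers come from the article):

```python
import math

def fov_deg(focal_mm, sensor_mm):
    """Angular field of view for a given focal length and sensor dimension."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

def grid_size(pano_w_deg, pano_h_deg, focal_mm=400, overlap=0.3,
              sensor_w=36.0, sensor_h=24.0):
    """Rows x columns of frames needed to cover a panorama, with each
    frame stepping by (1 - overlap) of its field of view."""
    h_fov = fov_deg(focal_mm, sensor_w)
    v_fov = fov_deg(focal_mm, sensor_h)
    cols = math.ceil(pano_w_deg / (h_fov * (1 - overlap)))
    rows = math.ceil(pano_h_deg / (v_fov * (1 - overlap)))
    return rows, cols

# Hypothetical 160° x 30° panorama extent
rows, cols = grid_size(160, 30)
print(rows, cols, rows * cols)  # 13 rows x 45 columns = 585 frames
```

Plug in the real angular extent you marked with the upper-left and lower-right corners and you get the number of frames the head will shoot.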
There are several other brands of panorama heads out there including Nodal Ninja and Clauss-Rodeon but I like the GigaPan the best since it is automated, simple and reliable. The GigaPan is also able to be connected to an external power source so the battery won’t run out during large image capture sequences.
Computer: I didn’t think the memory card would be large enough to store all the images, especially since I was going to be making multiple attempts at capturing the image, so I decided to shoot with the camera tethered to a MacBook Pro via Canon’s EOS Utility. This software not only allowed me to write the images directly to my hard drive, it also allowed me to zoom into the image in live view to get critical focus. Just in case something went wrong, I simultaneously wrote the images to an external hard drive as a backup.
Aperture: I set the aperture to f/8. This was done for a couple of reasons. The first was to increase the resolution of the image. Although the Canon 100-400mm f/5.6 II is a very sharp lens shooting wide open, stopping down the lens a little bit increases its sharpness. Stopping down the lens also reduces vignetting, which is a darkening of the edges and corners of the image.
Although the vignetting on this lens is minimal, I have found that even the slightest amount of vignetting in the frame will result in dark vertical bands showing up in the final stitched image. I didn’t want to stop down the aperture too much because I was worried about diffraction reducing the resolution of the image.
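That diffraction worry can be put into numbers. A back-of-the-envelope check (my figures, not the author’s) compares the Airy disk diameter to the 5DS R’s pixel pitch of roughly 4.1µm (36mm sensor width / 8688 pixels); once the disk grows much larger than a pixel, stopping down further costs resolution:

```python
def airy_disk_um(f_number, wavelength_um=0.55):
    """Diameter of the Airy disk (out to the first minimum) for green
    light (~550 nm) at a given f-number."""
    return 2.44 * wavelength_um * f_number

# Compare against the 5DS R's ~4.1 µm pixel pitch
for n in (5.6, 8, 11, 16):
    print(f"f/{n}: Airy disk ≈ {airy_disk_um(n):.1f} µm")
```

Even at f/8 the disk spans a couple of pixels, so going much past f/8 would start visibly softening a 50-megapixel frame, while f/8 still buys the sharpness and vignetting improvements over shooting wide open.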
Focal Length: I shot at 400mm so I could capture as much detail in the city as possible. I could have used a 2x teleconverter, but there was so much wind at the site that I was afraid the camera would move around too much and the image would come out blurry.
ISO: I shot at an ISO of 640 due to all the wind at the site. I knew that using a high ISO would increase my shutter speed and reduce the chance of vibrations from the wind blurring the photo.
Shutter speed: All of these factors combined gave me a final shutter speed of 1/2700 of a second.
RAW: I shot in .RAW (actually .CR2) to get the maximum resolution in the photos.
Live View: I used the camera’s live view function via Canon’s EOS Utility to raise and lock the mirror during the capture sequence. This reduced the chance of mirror slap vibrating the camera.
The GigaPan has a lot of different settings for the capture sequence of the images. One can shoot in columns from left to right, or in rows from the top down and left to right, or any combination thereof. I chose to shoot the image in rows, from the top down, going left to right. Even though the image capture sequence would only take an hour or so, I have found that shooting in this sequence makes for a more natural-looking image in case of any change in lighting conditions. I also included a 1-second pause between the GigaPan head moving and the camera triggering, to reduce any shake from the pano head’s movement.
I had to make a few attempts, but the final image was captured with 960 photos, each with a RAW file size of over 60MB.
Two Image Sets: Each day of the shoot presented different problems. One day the city was clear but the horizon and volcanoes were obscured by clouds. On another day the horizon was totally clear. I decided to create two different image sets and combine them to make the final image: one large image set for the clear sky and volcanoes, another for the city.
Pre-Processing: For the horizon and volcanoes, I brought an image that I felt represented an average exposure of the sky into Photoshop and corrected it to remove any vignetting.
For the image set of the city, I found an exposure of the city and color corrected and sharpened it the way I wanted before bringing the images into the stitching software. I recorded the adjustments I made and created a Photoshop droplet from them. I then dragged and dropped all the files onto the droplet and let it run, automatically correcting each image in the photo sequences. It took a long time, but it worked.
Autopano Giga: After the images were captured I put all of them into Autopano Giga. Autopano is a program that uses something called a scale-invariant feature transform (SIFT) algorithm to detect and describe local features in images. These features are then matched with features in other frames and the images are combined or stitched together. The software is pretty straightforward but I did a few things to make the final image.
Anti-ghosting: Autopano has an “anti-ghosting” feature, which is designed to avoid blending pixels that don’t match. This is useful for removing the half-cars or half-people that can show up in the image due to the movement of objects between frames.
Exposure blending: Just in case of any vignetting or differences in the lighting, I used the exposure blend function in the software to even out the exposures and make a nice blend.
.PSB: .PSB stands for Photoshop Big. The format is almost identical to Photoshop’s more common PSD format except that PSB supports significantly larger files, both in image dimension and overall size.
More specifically, PSB files can be used with images that have a height and width of up to 300,000 pixels. PSDs, on the other hand, are limited to 2 GB and image dimensions of 30,000 pixels. This 300,000-pixel limit is the reason why the final image has a 300,000-pixel width. I could have made the image a little bigger but I would have had to use a .kro format and I’m not sure that I would have been able to successfully blend the two images (one for the horizon and one for the city) together.
Computer: To stitch the .PSB together I used a laptop. I was worried that my laptop wouldn’t have enough horsepower to get the job done but it worked. The computer I used had the following specs: MacBook Pro (Retina, 15-inch, Mid 2015), 2.8 GHz Intel Core i7, 16GB 1600 Mhz DDR3, AMD Radeon R9 M370X 2048MB.
Hard Drive: The important thing to know when processing gigapixel images is that due to the large sizes of the images the processor speeds and RAM don’t really matter that much.
Since the processor cache and RAM fill up pretty quickly when processing an image of that size, the software directs everything to the hard drive, where it creates something called a “page file” or “swap file.” A page/swap file is a reserved portion of a hard disk that is used as an extension of random access memory (RAM) for data in RAM that hasn’t been used recently. By using a page/swap file, a computer can use more memory than is physically installed. However, if the computer is low on drive space, it can run slower because the swap file is unable to grow.
Since everything is happening on the hard disk, it’s really important to have a hard drive that is not only fast but also has a lot of space, since the swap file can get gigantic and the stitch won’t complete if there isn’t enough space available. To process the Quito image, I first tried a fast PCIe SSD with around 500GB of space, but the drive filled up. I took the computer back and got one with a 1TB PCIe SSD, and it was able to process the image.
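A pre-flight check like the following would have saved that return trip. This is a generic sketch, not part of the actual workflow described above, and the 700GB threshold is just an illustrative margin (somewhere between the 500GB drive that filled up and the 1TB one that worked):

```python
import shutil

def scratch_space_check(path, needed_gb):
    """Report free disk space before starting a stitch; the swap/scratch
    data can balloon to several times the size of the source files."""
    free_gb = shutil.disk_usage(path).free / 1024**3
    return free_gb, free_gb >= needed_gb

free, ok = scratch_space_check(".", needed_gb=700)
print(f"{free:.0f} GB free; {'OK to stitch' if ok else 'need a bigger drive'}")
```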
Photoshop: I had to stitch one image for the horizon and another for the city. Once these were done, I opened them in Photoshop and used the eraser tool, set to a large diameter, to manually blend them together. I then flattened the image and saved it as a .PSB file.
Image Tiling: I used a program called KRPano to tile the image. If I uploaded the resulting .PSB file to the internet directly, it would take forever to load. KRPano divides the image into layers of small tiles; the view you first see is made up of low-resolution tiles, and as you zoom in, higher-resolution tiles are quickly loaded and displayed, which allows people to explore the image without having to load the whole thing. About 174,470 tiles were created for this image.
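To get a feel for where a number like that comes from, here is a toy tile counter for a multi-resolution pyramid: each level halves the image until it fits in a single tile. The 512-pixel tile size and the level scheme are assumptions on my part; KRPano’s actual defaults and bookkeeping will give a different total than the 174,470 quoted above.

```python
import math

def pyramid_tile_count(width, height, tile=512):
    """Count tiles across a pyramid that halves the image each level
    until the whole image fits in a single tile."""
    total = 0
    while True:
        cols, rows = math.ceil(width / tile), math.ceil(height / tile)
        total += cols * rows
        if cols == 1 and rows == 1:
            return total
        width, height = math.ceil(width / 2), math.ceil(height / 2)

print(pyramid_tile_count(300_000, 55_313))  # tens of thousands of tiles
```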
Once all the image tiles were created, I compressed them into a .zip file. I felt it would be easier to upload one large file than over 174,000 separate small files. The upload went fine, and I manually unzipped the archive on the Hostgator server using FileZilla. It’s good to check with your hosting company to make sure they allow files to be unzipped on their servers.
Once the image was created, tiled, and uploaded, I made a simple website and embedded the .html file in an iframe so it could be displayed.
I hope that this little guide proves helpful for all of you. Gigapixel technology is really interesting and fun to try out. I have done quite a few gigapixel images but am by no means an expert and am always interested in learning more.
Dior’s FW18 Collection is Fiercely and Unapologetically Female
Dior’s FW18 collection is an empowered renegade of pro-woman propaganda. Taking us back to the late ‘60s, designer Maria Grazia Chiuri boldly immerses this new collection into full revolution mode, using florals and distinct construction techniques to embody a renaissance of trend, and more importantly, historical conversations surrounding the inequality of women in society.
A sweater emblazoned with ‘C’est non, non, non et NON!’ opens the show, with a runway entrance largely readin…
Workshop: Rosanne Olson on Analyzing—and Recreating—Every Kind of Light
Commercial and fine-art photographer Rosanne Olson recalls that when she started her career as a newspaper photographer, “I knew nothing about lighting.” Everything changed when she took a lighting workshop with Gregory Heisler, who taught her and other students “to work simply and with minimal lighting equipment,” and to blend strobe with ambient light. Olson says she brings those principles, along with her 30+ years of experience in the business, to her own students. Olson will lead the Santa Fe Workshops’ “ABCs of Beautiful Light” workshop from July 8-13. Here is what she says about her upcoming workshop, which will take place in Santa Fe:
“My goal in teaching is to really lead students to analyze the light in every image they see. They do this by evaluating the shape of the catch lights, the degree of hardness or softness of light (sun vs. shade, for example, or softbox vs. grid), and the height of the light and where the resulting shadow falls. When photographers learn this kind of analysis, they can light intelligently, i.e., not just moving lights around but understanding what each decision means and what effect it will have.
“Students learn to analyze tearsheets from books and magazines and identify what made that light (sun, strobe, shade, etc.). I often use Irving Penn’s work, for instance, because I love it and it is great for teaching how to use light simply to create strong portraits. We put [theory] into practice almost immediately, beginning with natural light plus fill, then work with continuous artificial sources and finally with strobes, learning to combine strobe with ambient light. Students learn the subtle language of light and fill and the difference that even small changes can make to create emotional impact in an image. Even seemingly unimportant things, such as the use of a fill card, can make a big difference.
“One exercise I give my students is to create an exact replica of an image that they like. It really helps deepen their sensitivity toward the lighting we see everywhere: in photographs, paintings, and movies.
“Here is an example of a replication that a student (Ulrica Lindstrom) did from a photo of Yul Brynner [shown above right; photographer unknown]. She analyzed the lighting in the original photo and then tried to recreate the image using a model (her husband). She sketched her lighting diagram, indicating the position of and the kind of lights she used. [The exercise] requires awareness of light height, quality, positioning of the model, lighting the background, etc.
“I try to encourage in my students a sense of curiosity about the light in the world around us: Examine how images move us and why. Examine how cinematography creates a sense of romance or dread. Look at the catch lights in your fellow humans’ eyes. What is it that makes that light shine? It’s really like learning a new language—suddenly your ear (or eye) is open to the world in a whole new way.”
Sony Shows Off the First Smartphone Camera with ISO 51200
Not content with creating low-light monsters in the world of interchangeable lens cameras, Sony has created a new smartphone dual camera that can shoot photos at a whopping ISO 51200 and videos at ISO 12800.
The technology is being showcased at the 2018 Mobile World Congress, which kicked off yesterday.
“At Sony, we know how important a sophisticated camera is in a smartphone, so we have continued to push the boundaries of what is possible,” the company says. “With a newly developed dual camera and Sony’s newly developed fusion image signal processor, which makes real-time processing possible, [the camera] allows image capture in extremely dark conditions.”
“Ultra-high sensitivity will allow users to take brighter images with less noise and less blur” with low-light abilities that are currently only possible with interchangeable lens cameras.
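To put those numbers in perspective, each doubling of ISO is one stop of sensitivity, so the jump from the ISO 1600 baseline Sony uses in its own comparisons can be expressed in stops. A quick sketch:

```python
import math

def stops_between(iso_low: float, iso_high: float) -> float:
    """Each doubling of ISO is one stop of sensitivity."""
    return math.log2(iso_high / iso_low)

# ISO 1600 -> ISO 51200 (stills)
print(stops_between(1600, 51200))  # 5 stops
# ISO 1600 -> ISO 12800 (video)
print(stops_between(1600, 12800))  # 3 stops
```

Five extra stops of sensitivity means the sensor can form an image with 1/32 of the light.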
Here’s an example Sony produced showing the difference between what the human eye can see at ISO 1600 and what its new dual camera can capture in video at ISO 12800:
Sony also showed off how ISO 51200 photos can “see the unseen”, capturing scenes where the human eye can only see mostly darkness:
You can watch the brief sneak peek for yourself at 20m53s into this 24.5-minute press conference:
No word yet on when we might see these dual cameras show up on a Sony smartphone.
Are you a freelance photographer like me? Have you already put your masterpieces up on 500px? Maybe you’re trying to share your photos and sell them at the same time, in case a stranger admires your work? If you answered YES to all of these questions, I’d like to share the terrible experience I had with 500px.
First off, by choosing to sell your work on 500px, you must agree to all the terms and conditions specified in a document called the Contributor Agreement. Then, you will see a list of prices next to each picture indicating how much each license costs if sold.
Of course, I’m completely okay with 500px earning some sort of commission in order for the platform to function properly. However, the real problem for me lies in a paragraph written in the Contributor Agreement.
Company (and its Distributors) shall have complete and sole discretion regarding the terms, conditions and pricing of Selected Images licensed to customers without the need for any consultation with Contributor. Company and its Distributors may enter into licensing arrangements for a quantity of Images, and may need to calculate royalties based on a ratio of Contributor Images licensed to the total number of Images licensed.
At first glance, this agreement didn’t present any issues for me, but unfortunately that all changed after a recent experience. When I reviewed my sales history, I realized I had fallen into a trap by signing onto the Agreement.
Two of my pictures (both titled “Chrono Cross”) were each sold under three different licenses.
A Large license priced at $149 earned me just $27, a $299 Unlimited Print license paid out only $3.96, and a Products for Resale license priced at $748 earned me just $8.
($27 + $3.96 + $8) × 2 ≈ $77.90

Roughly $77.90 was my total pay for the three licenses on each of the two photos, which were priced at a combined $2,392. Payment was made through PayPal, and my total take-home earnings were $54.54! As you can see, the price gap is enormous.
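The gap is easier to see as a single effective payout rate. A quick sketch using the figures reported above (the per-license payouts appear to be rounded, which is why the sum lands a couple of cents off the quoted $77.90 total):

```python
# List prices vs. actual payouts per license, as reported in the article.
list_prices = {"Large": 149, "Unlimited Print": 299, "Products for Resale": 748}
payouts     = {"Large": 27,  "Unlimited Print": 3.96, "Products for Resale": 8}

num_photos = 2  # both "Chrono Cross" photos sold under all three licenses
total_list   = sum(list_prices.values()) * num_photos  # $2,392
total_payout = sum(payouts.values()) * num_photos      # ~$77.92

print(f"List total:   ${total_list}")
print(f"Payout total: ${total_payout:.2f}")
print(f"Effective payout rate: {total_payout / total_list:.1%}")
```

In other words, the payout came to only about 3% of the listed prices.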
The Agreement specifies neither the selling price nor the royalty ratio. When it comes to setting prices for buyers, 500px has 100% control! Below is the response I received from 500px after I filed a complaint:
Some clients come to us asking to purchase a large volume of images so our sales team will often negotiate a discount; other times we offer promotional pricing to incentivize new buyers. Being flexible with our pricing gives us leeway to entice more clients to do business with us, clients that will pay full price the next time they make a purchase, and the time after that! As a business, this is why we need full discretion over pricing and ask that our contributors trust us to manage their sales effectively. We value all contributing photographers and we are proud to license your work.
After reading this reply, it’s hard to accept that the hard work and dedication of a photographer goes unnoticed because 500px decided to sell my pictures like scrap paper! I feel as if I were a garment factory worker being squeezed dry by the fashion industry, or a coffee bean farmer being treated unfairly by a coffee company.
Second, I do not know the actual price my photos were sold for or the percentage I was paid in royalties! All I saw was that the price list showed a much, much higher value than what I actually got paid.
After emailing them multiple times, I received the same response every time. 500px kept on telling me that with the Agreement in hand, they had done nothing wrong.
From a moral standpoint, I believe the Contributor Agreement is extremely unreasonable. I’ve already demanded that 500px remove all my photos from its site. To all the freelance photographers out there: think twice before you sign that 500px Contributor Agreement.
About the author: Ajax Lee is a fashion photographer based in Taiwan. The opinions expressed in this article are solely those of the author. You can find more of his work on his website, Facebook, and Instagram.
The first prize in the Professional Landscape category was awarded to Burmese photographer Zay Yar Lin for this photo titled “Sun’s Up, Nets Out“:
“An Intha fisherman sets up his net to fish as he paddles his boat with a unique leg-rowing technique in Myanmar’s Inle Lake,” the description reads.
The rules of SkyPixel 2017 state that (1) “Photos must be from 2017”, and (2) “Photographs from any aerial platform are welcome.”
It seems that Lin’s photo doesn’t meet either of those conditions… unless standing at a high location while holding a DSLR can be considered shooting with an “aerial platform.”
PetaPixel was informed by a tipster that the photo had been submitted to prestigious awards in recent years, a fact that SkyPixel was apparently unaware of.
Lin’s photo page on National Geographic’s Your Shot lists the EXIF details for the photo. The photo was captured on December 26th, 2014, using a Nikon D750 DSLR. The following year, Lin submitted it to the Nat Geo 2015 Traveler Photo Contest.
Samsung Galaxy S9 and S9+ Boast the First Dual Aperture Lens
Samsung has just announced the S9 and S9+ smartphones, which feature what Samsung says is the company’s most advanced camera ever. The low-light camera is the first in the smartphone industry to use a dual aperture lens.
“Good lighting is the secret to any great photo,” Samsung says. “But often, photos are taken in less-than-ideal lighting conditions and most smartphone cameras have a fixed aperture that can’t adjust to low or bright lighting environments resulting in grainy or washed out pictures.”
Samsung’s solution to this problem is a new camera that operates more like a human eye, which expands and contracts its iris. The new Dual Aperture lens can toggle between f/1.5 and f/2.4 depending on the situation.
The lens “lets in more light when it’s dark and less light when it’s too bright, taking photos that are crisp and clear anytime, anywhere,” Samsung says.
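For context on what that aperture swing is worth: the light a lens gathers scales with the inverse square of the f-number, so the jump from f/2.4 to f/1.5 can be quantified. A quick sketch:

```python
import math

def light_ratio(f_wide: float, f_narrow: float) -> float:
    """Light gathered scales with the inverse square of the f-number."""
    return (f_narrow / f_wide) ** 2

ratio = light_ratio(1.5, 2.4)
print(f"f/1.5 gathers {ratio:.2f}x the light of f/2.4")  # about 2.56x
print(f"that's {math.log2(ratio):.2f} stops")            # about 1.36 stops
```

That is to say, opening up from f/2.4 to f/1.5 lets in roughly two and a half times as much light.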
The S9 features a single 12-megapixel rear camera, while the S9+ uses a dual camera setup (the wide-angle dual aperture lens 12MP camera and a 12MP telephoto camera). Both phones have Dual Pixel wide angle sensors, optical image stabilization, and both feature an 8-megapixel front camera.
Other camera-related features include 960fps super slow-mo, Motion Detection (auto-recording when motion is detected in the frame), portrait mode (on the S9+), and multi-frame noise reduction that combines up to 12 distinct photos into a single high-quality photo.
Non-photo features include AR Emoji, augmented reality with Bixby (Samsung’s intelligence platform), AKG stereo speakers, surround sound, edge-to-edge displays (5.8-inch on the S9 and 6.2-inch on the S9+), water/dust resistance, wireless charging, memory that’s expandable to 400GB, and biometric authentication (iris, fingerprint, facial).
The Samsung Galaxy S9 and S9+ will be available in three colors (Lilac Purple, Midnight Black, and Coral Blue) starting on March 16, 2018, with price tags of $720 and $840 (respectively) when purchased unlocked.