
Lens shading control #470

Open
rwb27 wants to merge 20 commits into master

Conversation

@rwb27 commented Feb 9, 2018

This PR adds:

  • Updates to picamera.PiCamera that:
    • Make PiCamera.analog_gain writeable
    • Make PiCamera.digital_gain writeable
    • Add a new property PiCamera.lens_shading_table that allows setting of the camera's lens shading compensation table.
  • Requirements to enable the above features:
    • A Python header conversion for user_vcsm.h and an object-oriented wrapper in the style of mmalobj that makes it possible to work with VideoCore shared memory from Python
    • Updates to the mmal library with the new constants added to the userland code in late 2017, which enable setting the gains directly and manipulating lens shading correction

The module will run fine with older versions of the userland code, but will throw an exception if you try to set analog or digital gain, or use the lens shading table. I guess that makes it a "soft" dependency? The features were introduced late 2017 in a commit.

I thought passing in the lens shading table as a numpy array made good sense, but I have been fairly careful to avoid introducing any hard dependencies on numpy, having read the docs on picamera.array and assumed that this would be desirable.

I have tried to keep things like docstrings and code style consistent, but please do say if I can tidy up my proposed changes.

rwb27 and others added 13 commits January 25, 2018 15:05
NB I've not yet added the new datatypes for e.g. lens shading.  However, I have wrapped analog and digital gain so that you can set them.
Wrapped the videocore shared memory functions needed for lens shading (not the whole file.  Is there a script that does this??)
mmal.h is not documented, so probably this needn't be either.  However, I thought it was worth at least adding a link to the C header I wrapped (which has extensive comments).
I've not tested this yet!!!
Should now be complete...
@rwb27 commented Feb 9, 2018

PS this includes the changes in my other PR #463 so I will close it now.

@rwb27 mentioned this pull request Feb 9, 2018
@dhruvp commented Feb 17, 2018

@rwb27 thank you so much for putting this up! I was just looking for something exactly like this. Is there any place you could show an example of loading in a lens shading table and initializing the camera with it? It would be helpful to see the format in which the lens shading table needs to be loaded and passed in.

@rwb27 commented Feb 19, 2018

No problem. I have some code that does exactly that as part of my microscope control scripts but I will try to chop it out into a stand-alone script.

The basic principle is quite simple though: the array should be a 3-dimensional numpy array, with shape (4, (h+1)//64, (w+1)//64) where w, h are the width and height of the camera's full resolution. I've not extensively tested how this varies with video mode; I have always just used the maximum resolution, i.e. 3280x2464 for the camera module v2. The 4 channels correspond to Red, Green1, Green2, Blue gains, green appears twice because there are two green pixels per unit cell in the Bayer pattern. The other two dimensions correspond to position on the image - NB that it's height then width rather than the other way around.

You can either pass your numpy array to the camera's constructor (cam = picamera.PiCamera(lens_shading_table=myarray)) or simply set cam.lens_shading_table to the array. Note that doing the latter reinitialises the camera (like changing sensor_mode or resolution) so the constructor method is more efficient.

A complete example is below. This will set the camera's lens shading table to be flat (i.e. unity gain everywhere).

from picamera import PiCamera
import numpy as np
import time

with PiCamera() as cam:
    lst_shape = cam._lens_shading_table_shape()

lst = np.zeros(lst_shape, dtype=np.uint8)
lst[...] = 32 # NB 32 corresponds to unity gain

with PiCamera(lens_shading_table=lst) as cam:
    cam.start_preview()
    time.sleep(5)
    cam.stop_preview()

I should probably put this in the docs somewhere...

@dhruvp commented Feb 19, 2018

This is amazing - thank you so much for putting all this together. As a last clarification, are you sure the channel order should be [R, G1, G2, B]? I was looking through userland's lens_analyze script and it seems that script outputs in the order of [B, Gb2, Gb1, R]. At least that's what it looks like in my ls_table.h file after running their script.

Thanks!

@rwb27 commented Feb 20, 2018

hmm, you may be correct there - that would explain a few things. I think the middle ones are probably both green but I may have R and B swapped, it's possible that my code that generates the correction from a raw image has the channels swapped somewhere else. If you're able to test it before I am, do let me know. Bear in mind that white balance is applied after the shading table, so it's not quite as simple as just changing the average values for different channels.
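
For anyone who wants to check empirically, here is a minimal sketch (not part of the PR; it reuses the private _lens_shading_table_shape() helper from the example above) that boosts one channel of the table at a time, so the colour cast in the preview reveals which channel is which. White balance is pinned so it can't mask the change:

import time
import numpy as np
from picamera import PiCamera

# Query the table shape once, then build a flat table and boost one channel per pass.
with PiCamera() as cam:
    lst_shape = cam._lens_shading_table_shape()

for channel in range(4):
    lst = np.full(lst_shape, 32, dtype=np.uint8)  # 32 = unity gain everywhere
    lst[channel, :, :] = 64                       # roughly double one channel's gain
    with PiCamera(lens_shading_table=lst) as cam:
        cam.awb_mode = 'off'                      # fix white balance so it can't compensate
        cam.awb_gains = (1.0, 1.0)
        print('boosting channel %d - note the colour cast' % channel)
        cam.start_preview()
        time.sleep(5)
        cam.stop_preview()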

@dhruvp commented Feb 20, 2018 via email

@quetzacoal

Hello Richard,

I find your lens shading control extremely useful; the problem is that I'm not an expert in programming and I'm not able to follow your requirements to enable it.

Would it be possible to get a tutorial on how to install it? Is there a package I can download and install?

Thanks, Marc

@rwb27 commented Jun 21, 2018

Hi Marc,
that's a good point - I've tried to keep the fork "clean" to make it easy to pull back into the main PiCamera release. I do, however, have better instructions for how to install the software for the OpenFlexure Microscope which includes installing this fork. In short, you can install it with:

sudo pip install https://github.com/rwb27/picamera/archive/lens-shading.zip

The only requirement you should need to upgrade is the “userland” libraries on your Raspberry Pi, which you can do using the rpi-update command. However, the version that ships with the latest Raspbian image is already new enough, so if burning a new SD card is simpler, you can just do that.
If you are getting an error when you run the code above relating to _lens_shading_table_shape() it is unlikely to be due to missing requirements – that suggests to me that the module hasn’t been installed properly. Perhaps you could try the command above and check it completes successfully - with any luck that should solve the problem...

@rwb27 commented Jun 21, 2018

Oh, and while I'm here, for those of you interested in calibrating a camera, I've now written a closed-loop calibration script that works much better than my first attempt (which ports 6by9's C code more or less directly). I guess there must be something nonlinear in the shading compensation - I have not figured out what it is, but 3-4 cycles of trying a correction function and tweaking it seems to fix things. It's currently on a branch, but I'll most likely merge it into master soon; here's a link to the recalibration script.

@quetzacoal

Incredible! I managed to install your OpenFlexure microscope control using your installation guide. I also ran one of your examples and it worked perfectly. Now I am trying to use your recalibration script but it's telling me I need the microscope library... can I find it in one of your repositories or should I look somewhere else? Thanks

@rwb27 commented Jun 22, 2018

Excellent, glad that worked! If you've installed the openflexure_microscope library, it's best to run it from the command line. It will try to talk to a motor controller on the serial port by default, but there's a command line flag to turn that off. You can use:

  • openflexure_microscope --no_stage to run the camera with manual control of gain, exposure speed,...
  • openflexure_microscope --recalibrate to recalibrate the lens shading table so that the image is uniform and white.
    If that doesn't work (probably because the command line entry points weren't installed), try replacing openflexure_microscope with python -m openflexure_microscope.

If you are running the Python script directly, it might get confused about relative imports (because it's designed to be part of the module) - that is probably where the error about the microscope library comes from (it is in microscope.py in the openflexure_microscope module that you have already installed).

I should probably figure out a way to crop out the camera-related parts of this, but if you look in the relevant Python files you can probably figure out what's going on - or just use it through the openflexure_microscope module if that's easier. The important point to understand is that the recalibration routine saves a file (microscope_settings.npz) in the current directory, and that is loaded by default to set up the microscope. You can open that file with numpy to inspect its contents; the lens shading table will be in there.
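
If it helps, inspecting that settings file from Python is straightforward; a small sketch (the 'lens_shading_table' key is my assumption here - check settings.files for the actual names):

import numpy as np

settings = np.load('microscope_settings.npz')
print(settings.files)                   # names of the arrays stored in the file
lst = settings['lens_shading_table']    # assumed key name; adjust to what .files reports
print(lst.shape, lst.dtype)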

Hope that helps...

@quetzacoal

Ok, I understood everything now. The program works even better than I expected!

I don't know how can I repay you, thanks!

codeTom and others added 2 commits July 2, 2018 12:33
several python3 related fixes:
ctypes char * now requires bytes (was string)
some calculations now seem to need int()
@waveform80 (Owner)

Okay, I've finally had time to review this now and it'll definitely be going into 1.14 but I am going to make some alterations. The major one is I'm not entirely happy depending on numpy for the table and I don't think it's necessary - i.e. we can simply require that whatever is passed in for the table implements the buffer protocol (which numpy arrays do, so this doesn't mean you can't use them - you can - but it'll mean numpy isn't absolutely required for it). Basically I'll make it similar to add_overlay.

Incidentally, we can still have all the checks about correct shape, stride, order, etc. as the memoryview interface implements all of that too (well ... most of that in 2.7, all of that in 3.3 onwards so I'll need to throw some backward compat workarounds in there, but that's fine).
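
As a rough sketch of what I mean (names and the exact checks here are illustrative, not the final API), the incoming table could simply be normalised to a memoryview up front:

import numpy as np

def to_lens_shading_view(source):
    # Accept anything implementing the buffer protocol (numpy arrays,
    # bytearray, array.array, ...) without requiring numpy itself.
    view = memoryview(source)
    if view.format not in ('B', 'b'):
        raise ValueError('lens shading table must contain 8-bit values')
    return view

# numpy still works, it just isn't required:
table = np.full((4, 39, 52), 32, dtype=np.uint8)  # shape for illustration only
view = to_lens_shading_view(table)
print(view.ndim, view.shape)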

Anyway, other than that the rest is looking great! I've yet to read through the whole thread above but it looks like there might be some useful snippets there for examples in the Advanced Recipes section of the manual so I'll try and get through those too this week.

@rwb27 commented Jan 9, 2019

@cpixip I've just run a slightly more in-depth calibration routine on the v2 camera, which calculates a full colour-unmixing matrix for each position on the sensor. That means that, if you're prepared to do post-processing on the images (or implement some sort of super exciting GPU-accelerated rendering) it's possible to completely compensate the effect of the lenslet array. The only penalty is an increase in noise, of 2-3x at the edges of the image. Of course, as you say, the lens shading correction that's built in to the camera pipeline doesn't do this, which means you will always lose saturation towards the edges of the image.
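
For anyone curious what that post-processing step looks like, here is a minimal numpy sketch (array names and shapes are illustrative, not taken from my actual analysis code) of applying a position-dependent 3x3 unmixing matrix to every pixel:

import numpy as np

def unmix_colours(image, matrices):
    # image:    (H, W, 3) RGB values
    # matrices: (H, W, 3, 3) position-dependent colour-unmixing matrices
    # each output pixel is matrices[y, x] @ image[y, x]
    return np.einsum('hwij,hwj->hwi', matrices, image)

H, W = 480, 640
image = np.random.rand(H, W, 3)
matrices = np.tile(np.eye(3), (H, W, 1, 1))  # identity matrices = no correction
corrected = unmix_colours(image, matrices)
print(corrected.shape)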

I'm currently tidying up my analysis and will write a report, which I'll share soon I hope!

@cpixip commented Jan 9, 2019

@rwb27 - wow, cool. It will be interesting to see the results. And yes, you are right - ideally you want to do it on the GPU, otherwise the computations will be probably too time-consuming. Some years ago I did implement some image processing algorithms for ultrasound images on a PC graphics card. Basically by directly programming the algorithms into vertex- and fragment-shaders (no CUDA or so). But that was years ago (so I forgot most of that stuff, I am afraid) and I do not know whether the Raspberry hardware does support easy access to shaders or has sufficient processing power to handle such an approach. I have seen approaches to this problem where not a full decorrelation matrix is stored for each pixel, but only a functional description described by a few components (which would take substantially less texture memory to implement). Thinking about it, GPU-hardware should be well suited for the task at hand - multiplying the original RGB-signal with a position-dependent 3x3-matrix.

In any case, I am really curious how your approach works and how good the results are. And probably a few other people are interested in this too... - so please share your report if possible!

@cpixip commented Jul 27, 2019

Hi rwb27 (and others),

I just tried out your lens-shading algo on a new Raspi 4/2GB and it froze on this line:

with PiCamera(lens_shading_table=table) as camera:

Also tried to set the table directly, as in
camera.lens_shading_table = table

and it failed as well at that point. Tested with a v1-camera.

Both codes work on older raspi hardware, like a raspi 2 or 3. Did anyone else succeed in getting this to run on newer raspi-hardware?

@Tim-Brown-NZ

Hi rwb27 (and others),

I just tried out your lens-shading algo on a new Raspi 4/2GB and it froze on this line:

with PiCamera(lens_shading_table=table) as camera:

Also tried to set the table directly, as in
camera.lens_shading_table = table

and it failed as well at that point. Tested with a v1-camera.

Both codes work on older raspi hardware, like a raspi 2 or 3. Did anyone else succeed in getting this to run on newer raspi-hardware?

Hi, I have a similar problem, when trying to run the code on a Pi 3 which has been upgraded to the latest release of the OS.
That is, if I do:
sudo apt-get update
sudo apt-get upgrade

Then loading the lens shading table takes a long time and eventually, usually, comes back with a timeout and buffer size error. During this time the preview window is not (cannot be?) shown.

Perhaps an OS change to support the Pi4 has broken something?

@rwb27 commented Aug 12, 2019

Hi @cpixip @TimBrownConsulting we've had that issue too. It relates to a recent update to the GPU firmware that runs the camera, specifically the auto-exposure algorithm (which has been replaced with a newer, fancier version). We (by which I mean @jtc42) opened an issue upstream on the firmware repo, which has been fixed - but there's another issue (relating to the white balance gains) that means our calibration still goes wrong. The work-around for now is to use the debug mode (helpfully referenced in the first issue thread) to disable the new behaviour and revert to the old auto-exposure algorithm. It's not 100% satisfying but works for now, and hopefully we can work with the firmware developers to sort it out in the new version. @jtc42 is away this week, but I'm sure he'll comment here once he returns.

@cpixip commented Aug 12, 2019

@rwb27, @TimBrownConsulting - Hi everybody. Just wanted to confirm that the magic "sudo vcdbg set awb_mode 0" command does the trick with the new Raspberry Pi 4 firmware. Great! A noticeable speed-up can be observed in serving frames from the new hardware. Thanks everybody!

@iHD992 commented Aug 30, 2019

@rwb27 I have some questions. What are the possible values I can put into analog_gain and digital_gain? How is the ISO calculated? Is it analog_gain * digital_gain * 100 and then heavily rounded? What is the highest ISO I can get this way using V1 or V2?

@rwb27 commented Sep 2, 2019

@iHD992 I believe the sensible values range from below 1 up to about 4, but I don't remember ever actually reading minimum or maximum values. What I can say is that, by and large, it's pretty safe to experiment by writing a value, then reading it back a second later (the delay is important). If you try to set it to an invalid value, it will either raise an error, or the value you read back will not be the same as the one you wrote.
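
A minimal sketch of that write-then-read-back check, assuming this fork's writable gain properties:

import time
from picamera import PiCamera

with PiCamera() as cam:
    cam.analog_gain = 2.0   # writable in this fork
    time.sleep(1)           # the delay matters: the firmware applies the value asynchronously
    readback = float(cam.analog_gain)
    print(readback)         # if this is far from 2.0, the value wasn't accepted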

Calculating ISO is neither trivial nor linear! I would have to do some googling on that point, but I'm pretty sure the answer is very much not as simple as the calculation you suggest. Setting the ISO value is only meaningful if you're using auto-exposure, and generally if you're setting gains manually, you are probably also setting the exposure time manually. I believe asking for ISO 100 will tend to use a relatively low value of analogue gain - but this isn't necessarily the lowest possible gain on the v2 camera module, as it was deliberately set to be consistent with the v1. I am not the authority on ISO numbers though; I'd keep googling, because I know there are some discussion threads where people go into some detail.

@Tim-Brown-NZ

@iHD992 I've played with this a bit since for my application I have to do my own "auto exposure". I am working on a digital microscope so for different samples I set the image exposure using the gains to get close to what I want and then vary the intensity of the lighting to get the "right" exposure. For what it's worth the numbers I use for the gains are:
AnalogGains = [8, 8, 8, 8, 8, 8, 6.4, 5.1, 4.1, 3.3, 2.6, 2.1, 1.7, 1.4, 1.1, 1, 0]
DigitalGains = [4, 2.8, 2, 1.4, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0]
As you can see, rather than use a ratio between the two numbers, I use analog gain until I get up to 8 and then digital gain after that. In practice, in my application I rarely set analog gain above about 3.3 (and therefore digital gain stays at 1.0). I think these numbers are actually stored internally as fractions, so the number reported back may vary very slightly from the number set.
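
For what it's worth, the arithmetic of that "analogue first, then digital" split can be written as a tiny helper (just the calculation, not tied to any particular camera API):

def split_gain(total_gain, max_analog=8.0):
    # Use analogue gain up to max_analog, then make up the rest digitally.
    analog = min(total_gain, max_analog)
    digital = total_gain / analog if analog > 0 else 0.0
    return analog, digital

for g in (1.0, 3.3, 8.0, 16.0, 32.0):
    print(g, split_gain(g))  # e.g. 16.0 -> (8.0, 2.0)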

@iHD992 commented Sep 5, 2019

@TimBrownConsulting Do you have V1 or V2 of the camera? Do you know how these values correspond to the registers in V1?

@Tim-Brown-NZ

@iHD992 I'm using a V2 camera.
On this page: https://picamera.readthedocs.io/en/release-1.13/_modules/picamera/camera.html

    On the V1 camera module, non-zero ISO values attempt to fix overall
    gain at various levels. For example, ISO 100 attempts to provide an
    overall gain of 1.0, ISO 200 attempts to provide overall gain of 2.0,
    etc. The algorithm prefers analog gain over digital gain to reduce
    noise.

    On the V2 camera module, ISO 100 attempts to produce overall gain of
    ~1.84, and ISO 800 attempts to produce overall gain of ~14.72 (the V2
    camera module was calibrated against the ISO film speed standard).
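
Reading that literally gives a rough rule of thumb for the V2 module (my assumption: the overall gain scales linearly with the requested ISO between the two quoted calibration points):

def approx_v2_overall_gain(iso):
    # ISO 100 -> ~1.84, ISO 800 -> ~14.72 per the docstring quoted above
    return 1.84 * iso / 100.0

for iso in (100, 200, 400, 800):
    print(iso, approx_v2_overall_gain(iso))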

@prayuktibid

Is the forked picamera library not working on Raspbian Jessie? In my case it is not working even after updating the RPi firmware.

@rwb27 commented Sep 9, 2019

Hi @prayuktibid what error are you getting? I haven't tested it on Jessie, only on Buster and Stretch. It does rely on relatively recent userland libraries, so it is possible that the lens shading part won't work on Jessie, but I can't see why the upstream version wouldn't work - and my fork shouldn't break that, as far as I can see.

Does the upstream (i.e. official) version of the library work for you?

What error do you get when you try to use the fork?

@prayuktibid

Thank you Mr. Richard @rwb27 for your reply.
The official picamera library works on the RPi 2 Model B with Jessie; no issue with it.
But when I use the forked library on the RPi 2 Model B with Jessie it doesn't work. Also, it does not throw any exception when I include the lens_shading_table.npy; it just cannot open the camera.
The same thing happened with the RPi 3 and Stretch, but after an RPi firmware update the shading table worked flawlessly. That's why I updated the RPi 2 Model B firmware, but there was no change.

@iHD992 commented Sep 9, 2019

Maybe Jessie did not get the necessary update for the “userland” libraries.

@prayuktibid

@iHD992 I have updated to the latest RPi firmware, but there is still no progress.

Any suggestions? @iHD992 @rwb27

@marcodc-sys

Hi there, I have installed the microscope software and everything went okay. I ran microscope --recalibrate and obtained microscope_settings.npz. Now I need to use the camera's lens shading table with the picamera library to run my own code. How can I do this? It's driving me crazy.

@rwb27 commented Jan 22, 2020

Hi there, I have installed the microscope software and everything went okay. I ran microscope --recalibrate and obtained microscope_settings.npz. Now I need to use the camera's lens shading table with the picamera library to run my own code. How can I do this? It's driving me crazy.

Hi @marcodc-sys, the simplest way is to pass a lens_shading_table argument to the constructor of your PiCamera object. This can come directly from the numpy file, if you do:

import numpy as np
from picamera import PiCamera

settings = np.load("microscope_settings.npz")
with PiCamera(lens_shading_table=settings["lens_shading_table"]) as cam:
    pass  # use the camera here

The other PiCamera settings are also saved in the npz file and can be accessed in the same dictionary-like way. There is a convenience function in the microscope software that you can import; it takes the settings file as an argument and returns a PiCamera object you can start using immediately.

I should also mention that the version of the microscope software on GitHub is no longer what we are using - the new version is on GitLab, in the “OpenFlexure” organisation. However, that new version of the software is rather more complicated, so it might not be as useful a resource.

@rwb27 commented Jan 30, 2020

I've done some more work on this in my fork (a different branch) to make it merge-able, but have not yet tested it. The revised version is here:
https://github.com/rwb27/picamera/tree/master
Once we've tested it, it should be up to date with waveform80/master, and I'll update this pull request.

@rwb27 commented Feb 18, 2020

I've now tested https://github.com/rwb27/picamera/tree/master which is up to date with waveform80/master. I've also added a test for the lens shading table property, which passes (although there are a few other failures on my system, which I don't think are due to my changes). I think it might make sense to start a new PR for that - though I may also merge the changes onto this branch, unless anybody would find that really annoying?

There is exactly one breaking change, which is that PiCamera.lens_shading_table now returns either a memoryview object or None, i.e. if you want a numpy.ndarray you will need to wrap it:

arr = np.array(cam.lens_shading_table)

This is to address @waveform80's request to avoid baking numpy in too hard. You can still assign an ndarray to the property: because arrays implement the buffer protocol, it is efficiently and silently converted into a memoryview object without any issues.
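
A short sketch of that round trip (using the private _lens_shading_table_shape() helper from earlier in this thread to get a valid shape):

import numpy as np
from picamera import PiCamera

with PiCamera() as cam:
    shape = cam._lens_shading_table_shape()
    cam.lens_shading_table = np.full(shape, 32, dtype=np.uint8)  # ndarray goes in...
    lst = cam.lens_shading_table                                 # ...memoryview (or None) comes out
    arr = np.array(lst)                                          # wrap it if you want an ndarray back
    print(type(lst), arr.shape, arr.dtype)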

@rwb27 mentioned this pull request Feb 18, 2020
@Weruminger commented May 23, 2020

Hi,
first: you did a great job!

I tried to use the lens shading control for long-exposure astrophotography, using a passively cooled Pi NoIR v2 camera with this adaptation:
[image: Pi Cam Astro]

But I have the issue that I can only take one shot. Everything finishes and the Python process is no longer in the process list, which is fine, but it hangs at the 2nd shot and I cannot kill the Python process - only reboot.

🟢 SOLVED: SOLUTION AT THE TAIL OF THIS COMMENT 🟢

the code i did use:

from picamera import PiCamera
import numpy as np
import time
from datetime import datetime
from fractions import Fraction

print('init done')
with PiCamera() as cam:
    lst_shape = cam._lens_shading_table_shape()
print('shape done')
lst = np.zeros(lst_shape, dtype=np.uint8)
lst[...] = 32  # NB 32 corresponds to unity gain
print('shape defined')
with PiCamera(lens_shading_table=lst) as cam:
    print('cam opened')
    cam.resolution = (1296, 976)
    print('resolution set')
    cam.framerate = Fraction(1, 2)
    print('framerate set')
    cam.shutter_speed = 2000000
    print('shutter set')
    cam.exposure_mode = 'off'
    print('mode set')
    cam.iso = 800
    print('iso set')
    cam.capture('image${env.BUILD_NUMBER}.jpg')
    print('capture done')

The result (an unsharp flat with the NoIR chip) looks fine:


Any idea? Here is the messages log covering that time:

messages.log

❗ SOLUTION ❗

The investigation led to some changes in the code, but the root cause was the value of cam.exposure_mode: it must not be 'off'.

from picamera import PiCamera
import numpy as np
import time
from datetime import datetime
from fractions import Fraction

print('init done')
with PiCamera() as cam:
    lst_shape = cam._lens_shading_table_shape()
print('shape done')
lst = np.zeros(lst_shape, dtype=np.uint8)
lst[...] = 32  # NB 32 corresponds to unity gain
print('shape defined')
with PiCamera(lens_shading_table=lst, resolution=[1640, 1232], sensor_mode=4, framerate=Fraction(1, 3)) as cam:
    print('cam opened')
    cam.exposure_mode = 'verylong'  # not 'off'
    print('mode set')
    cam.shutter_speed = 3000000
    print('shutter set')
    cam.iso = 200
    print('iso set')
    print('timeout for 5s')
    # With a 20 s wait the cam.exposure_speed value is fully settled, but that was more
    # time than I wanted to spend. 5 s works fine in my case but may need changing
    # in your adaptation.
    time.sleep(5)
    print(cam.exposure_speed)
    print(cam.shutter_speed)
    for cnt, _ in enumerate(cam.capture_continuous('image{counter:03d}.jpg', burst=True, format='jpeg', bayer=True, thumbnail=None, quality=60)):
        print('start capture: {c:03d}'.format(c=cnt))
        if cnt >= 4:
            break
    print('capture done')
    cam.framerate = Fraction(1, 1)
    print('timeout for 2s')
    time.sleep(2)
    print('close cam')
    cam.close()
exit()

@Weruminger commented May 23, 2020

🟢 SOLVED see previous comment.
To my eyes it seems to hang in a non-interruptible loop in the hardware control.
Could it be the mmal class?

@Weruminger commented May 23, 2020

🟢 SOLVED see previous comments.

My currently installed modules are:
Py_2.7_modules.txt
Py_3.7_modules.txt

Tests have been done in Python 2.7

@Weruminger commented May 23, 2020

Another question:
Is there any how-to or tutorial on creating the shading tables from a given RGB JPEG with raw data, like this one?

🟢 SOLVED here 🟢
Sorry, I did not recognise it at first read.

@abingham

What's the status of this PR?

@rwb27 commented Jan 14, 2021

We're currently maintaining a fork with this (and a few other) pull requests, which is now distributed on PyPI as picamerax. We'd be delighted to get everything integrated back into the upstream project, as and when the maintainers have bandwidth to do that.
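
For anyone landing here now, the fork is intended as a drop-in replacement; assuming the published package keeps the picamerax import name, usage looks like this:

# pip install picamerax
from picamerax import PiCamera  # instead of `from picamera import PiCamera`

with PiCamera() as cam:
    print(cam.revision)  # e.g. 'imx219' for the v2 module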
