Resulting color problem #41

Open
Tara-Liu opened this issue Dec 11, 2024 · 5 comments

Comments

@Tara-Liu

When running on the given test sample, the resulting image is very dark.

[image: attached screenshot of the dark output]

@Jamy-L
Owner

Jamy-L commented Dec 11, 2024

I am surprised to see such a dark image indeed. The results on the IPOL demo are brighter. I don't have access to a GPU right now and the online demo appears to be down, so could you provide the exact input command or script used to generate the image? It probably has to do with a post-processing parameter. In the meantime, you can try to change or disable the color correction, gamma, and tonemapping.

@Tara-Liu
Author

> could you provide the exact input command or script used to generate the image?

This is my running command:

python run_handheld.py --impath test_burst/Samsung --outpath output.png

@Jamy-L
Owner

Jamy-L commented Dec 12, 2024

After looking into it in more detail, I can safely say that this issue does not come from the super-resolution code, but rather from the post-processing, which has been kept to a strict minimum to avoid making the code too heavy. If it is really an issue for you, you can disable the built-in postprocessing using the parameter dictionary:

params = {
    "scale": 2,  # choose between 1 and 2 in practice.
    "merging": {"kernel": "handheld"},  # keep unchanged.
    "post processing": {"on": True},  # set it to False if you want to use your own ISP.
}
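
Something like this would be a minimal sketch of using it; the import path and the options dict below are placeholders on my side, and the call signature mirrors the IPOL snippet further down:

import numpy as np

from handheld_super_resolution import process  # assumed import path for this repo's process()

options = {"verbose": 1}  # hypothetical options dict; use whatever your own script builds
params["post processing"]["on"] = False  # use your own ISP instead of the built-in one

handheld_output, debug_dict = process("test_burst/Samsung", options, params)
handheld_output = np.clip(np.nan_to_num(handheld_output), 0, 1)
# handheld_output is now the raw (linear) super-resolved array, ready for custom post-processing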

The output array can then be manipulated to perform any post-processing better suited to your needs. The IPOL demo actually disables the built-in postprocessing and uses a different one; here is a sample of that code:

 
# Assumed imports for this fragment; the demo script also provides `args` (an argparse
# namespace), `options`, `params`, `impaths`, and the process() function from this repository.
import os

import cv2
import numpy as np
import rawpy
from skimage import filters, img_as_float32


def postprocess(img):
    ## Gamma
    img = np.clip(img, 0, 1)
    img = np.maximum(img, 1e-8) ** (1. / 2.2)
    ## S-curve tone mapping (smoothstep)
    img = 3 * img ** 2 - 2 * img ** 3
    ## Sharpening
    # img = polyblur.polyblur_deblurring(img, n_iter=3, alpha=args.alpha, beta=args.beta, c=args.c, b=args.b)
    img = filters.unsharp_mask(img, radius=args.radius, amount=args.amount)
    img = np.clip(img, 0, 1)
    return img


handheld_output, debug_dict = process(args.impath, options, params)
handheld_output = np.nan_to_num(handheld_output)
handheld_output = np.clip(handheld_output, 0, 1)

#### Read the reference ####
print('Demosaicking using DCRAW and upscaling using nearest neighbor interpolation')
ref_lr = rawpy.imread(os.path.join(args.impath, impaths[0]))
ref_lr = ref_lr.postprocess(use_camera_wb=True, gamma=(1.0, 1.0), output_bps=16,
                            output_color=rawpy.ColorSpace.raw, no_auto_bright=True)
ref_lr = img_as_float32(ref_lr)

#### Autobright the images - 0.5% of pixels will be saturated.
handheld_output_small = cv2.resize(handheld_output, dsize=None, fx=1. / 8, fy=1. / 8,
                                   interpolation=cv2.INTER_LINEAR)
handheld_output_small = np.mean(handheld_output_small, axis=-1)  # grayscale
nbins = 2048
count, intensity = np.histogram(handheld_output_small.ravel(), bins=nbins)
cumulative_count = np.cumsum(count)
quantile = 1 - (0.5 / 100)
threshold = count.sum() * quantile

# find the intensity level below which 99.5% of the pixels fall
for n in range(nbins):
    if cumulative_count[n] > threshold:
        top_intensity = intensity[n]
        break

autobright = 1. / top_intensity
# autobright = args.autobright  # or use a fixed gain from the command line instead

ref_lr *= autobright
ref_lr = np.clip(ref_lr, 0, 1)
handheld_output *= autobright
handheld_output = np.clip(handheld_output, 0, 1)

#### Postprocess the images.
handheld_output_corrected = postprocess(handheld_output)
ref_lr = postprocess(ref_lr)
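
As a hedged follow-up (not part of the IPOL snippet above), the corrected float array could then be written to disk like this; it assumes an RGB image scaled to [0, 1] and uses OpenCV, which expects BGR channel order:

# Hypothetical addition: save the corrected result as an 8-bit PNG.
out_8bit = (np.clip(handheld_output_corrected, 0, 1) * 255).astype(np.uint8)
cv2.imwrite('output_corrected.png', cv2.cvtColor(out_8bit, cv2.COLOR_RGB2BGR))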

The results on the IPOL demo are indeed brighter and have more contrast, so this postprocessing may better suit your needs. Alternatively, you could export the raw super-resolved image as a .dng (as indicated in the README) and perform professional-grade postprocessing in external software such as the Adobe suite.

@Tara-Liu
Author

Yes, I found that the super-resolved images are clearer, but the colors are incorrect. Additionally, I noticed that running python run_handheld.py --impath test_burst/Samsung --outpath output.png produces incorrect colors, while python example.py generates correct colors. This is most likely related to post-processing (I am using the release version of the code).
[image: attached screenshot of the results]

@Jamy-L
Owner

Jamy-L commented Dec 16, 2024

I figured it out: run_handheld.py and example.py had different default postprocessing settings, so the colour correction, gamma, and tonemapping were disabled when running run_handheld.py. You can enable them this way:
python run_handheld.py --impath test_burst/Samsung --outpath output.png --do_gamma true --do_color_correction true --do_tonemapping true

I will push a new release where they are on by default.

Let me know if it works for you 😊
