Solved

# raw image 10-bit to 16-bit

Posted on 2003-03-07
Hi

I have a raw image with a single channel (b/w) at 10 bits per pixel.
I need to convert the raw data to 16 bits per pixel.

Can I use a linear conversion, or do I need a different formula?

What I'm doing now is a linear conversion: OUT = IN * 65535 / 1023.

Question by:alfarod


Accepted Solution

lordxeroth earned 80 total points
ID: 8088257

Look at it this way: transform the 10-bit signal to an analogue signal (D/A conversion) and then resample it at 16 bits (A/D conversion). So the formula you gave is exactly what you need; there is nothing fancy about it.

Of course you must round the new value to the nearest integer, and then everything is as good as it gets. Normally you won't see the difference at all, since the maximum error is 0.5 in 65535, or less than 0.001%.

kind regards,
Lord Xeroth
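A minimal sketch of the accepted approach (linear rescale with rounding to the nearest integer), in Python with an illustrative function name; reading and writing the actual raw file is not shown:

```python
def scale_10_to_16(value):
    """Rescale a 10-bit sample (0..1023) to 16-bit (0..65535).

    Implements OUT = IN * 65535 / 1023, rounded to the nearest integer,
    using pure integer arithmetic: adding half the divisor before the
    integer division performs the rounding.
    """
    if not 0 <= value <= 1023:
        raise ValueError("expected a 10-bit value in the range 0..1023")
    return (value * 65535 + 1023 // 2) // 1023

# The endpoints map exactly: 0 -> 0 and 1023 -> 65535,
# so black stays black and full-scale white stays full-scale white.
print(scale_10_to_16(0))     # 0
print(scale_10_to_16(1023))  # 65535
```

Because 65535 = 1023 * 64 + 63, the result is close to a simple left shift by 6 bits, but the shift alone would top out at 65472 instead of 65535; the division by 1023 stretches the range so both endpoints land exactly.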
