
I need to be able to convert 8-bit or 16-bit grayscale pixel data into a format that the .NET Framework can support.

The data I have available is the width, height, orientation (bottom-left), and the pixel format: 4096 shades of gray (12-bit resolution) packed into 2 bytes per pixel.

So, for example, each pixel value ranges from 0 to 4095, and each pixel occupies 2 bytes.
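For concreteness, reading one pixel value back out of the buffer looks something like this (the buffer name and index are just for illustration, and I'm assuming the low byte comes first):

// Assumes low byte first, high nibble in the second byte.
ushort pixel = (ushort)( buffer[ i * 2 ] | ( buffer[ i * 2 + 1 ] << 8 ) );
pixel &= 0x0FFF; // keep only the 12 significant bits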

I have already tried using PixelFormat.Format16bppGrayScale with the Bitmap constructor, and it throws a GDI+ exception. Everything I have read says that this format is not supported and that MSDN is wrong.

I want to convert this pixel buffer into a .NET Bitmap format (such as Format32bppArgb) with as little image quality loss as possible.

Anyone know how?


3 Answers


See the example below, which precomputes a lookup table (LUT) and uses it to convert each pixel. This version covers your 12-bit case; the 8-bit case is very similar, but it is difficult to generalize across pixel formats.

A conversion from 12-bit grayscale to effectively 8-bit grayscale will lose data. However, you can adjust the LUT to focus on a smaller range of input values with better contrast (e.g., DICOM Window Center/Window Width).

// Requires the project to allow unsafe code (ConvertCore uses pointers).
using System;
using System.Diagnostics;
using System.Drawing;
using System.Drawing.Imaging;

class Program
{
    static void Main( string[] args )
    {
        // Test driver - create a Wedge, convert to Bitmap, save to file
        //
        int width = 4095;
        int height = 1200;
        int bits = 12;

        byte[] wedge = Wedge( width, height, bits );

        Bitmap bmp = Convert( wedge, width, height, bits );

        string file = "wedge.png";

        bmp.Save( file );

        Process.Start( file );
    }

    static Bitmap Convert( byte[] input, int width, int height, int bits )
    {
        // Convert byte buffer (2 bytes per pixel) to 32-bit ARGB bitmap

        var bitmap = new Bitmap( width, height, PixelFormat.Format32bppArgb );

        var rect = new Rectangle( 0, 0, width, height );

        var lut = CreateLut( bits );

        var bitmap_data = bitmap.LockBits( rect, ImageLockMode.WriteOnly, bitmap.PixelFormat );

        ConvertCore( width, height, bits, input, bitmap_data, lut );

        bitmap.UnlockBits( bitmap_data );

        return bitmap;
    }

    static unsafe void ConvertCore( int width, int height, int bits, byte[] input, BitmapData output, uint[] lut )
    {
        // Copy pixels from input to output, applying LUT

        ushort mask = (ushort)( ( 1 << bits ) - 1 );

        int out_stride = output.Stride;   // bytes per row in the output bitmap
        int in_stride = width * 2;        // bytes per row in the 2-byte-per-pixel input

        byte* out_data = (byte*)output.Scan0;

        fixed ( byte* in_data = input )
        {
            for ( int y = 0; y < height; y++ )
            {
                uint* out_row = (uint*)( out_data + ( y * out_stride ) );

                ushort* in_row = (ushort*)( in_data + ( y * in_stride ) );

                for ( int x = 0; x < width; x++ )
                {
                    ushort in_pixel = (ushort)( in_row[ x ] & mask );

                    out_row[ x ] = lut[ in_pixel ];
                }
            }
        }
    }

    static uint[] CreateLut( int bits )
    {
        // Create a linear LUT to convert from grayscale to ARGB

        int max_input = 1 << bits;

        uint[] lut = new uint[ max_input ];

        for ( int i = 0; i < max_input; i++ )
        {
            // map input value to 8-bit range
            //
            byte intensity = (byte)( ( i * 0xFF ) / ( max_input - 1 ) );

            // create ARGB output value A=255, R=G=B=intensity
            //
            lut[ i ] = (uint)( 0xFF000000L | ( intensity * 0x00010101L ) );
        }

        return lut;
    }

    static byte[] Wedge( int width, int height, int bits )
    {
        // horizontal wedge

        int max = 1 << bits;

        byte[] pixels = new byte[ width * height * 2 ];

        for ( int y = 0; y < height; y++ )
        {
            for ( int x = 0; x < width; x++ )
            {
                int pixel = x % max;

                int addr = ( ( y * width ) + x ) * 2;

                pixels[ addr + 1 ] = (byte)( ( pixel & 0xFF00 ) >> 8 );
                pixels[ addr + 0 ] = (byte)( ( pixel & 0x00FF ) );
            }
        }

        return pixels;
    }
}
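
As a sketch of the windowing idea mentioned above (the method name and the center/width parameters are my own additions, following the usual DICOM convention), something like this could replace CreateLut:

    // Sketch only: a windowed LUT in the spirit of DICOM Window Center/Width.
    // Input values below the window map to black, values above it map to white,
    // and the range in between is stretched linearly over the 8-bit output.
    static uint[] CreateWindowedLut( int bits, double center, double width )
    {
        int max_input = 1 << bits;

        uint[] lut = new uint[ max_input ];

        double low = center - ( width / 2.0 );

        for ( int i = 0; i < max_input; i++ )
        {
            double t = ( i - low ) / width;            // 0..1 inside the window
            t = Math.Max( 0.0, Math.Min( 1.0, t ) );   // clamp outside the window

            byte intensity = (byte)Math.Round( t * 255.0 );

            lut[ i ] = 0xFF000000u | ( (uint)intensity * 0x00010101u );
        }

        return lut;
    }

Passing center = 2048 and width = 4096 approximately reproduces the full-range linear mapping; narrower windows trade range for contrast.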
answered 2011-08-02T18:08:33.443

Spoof a 16-bit format and use a ColorMatrix to map it correctly before display.

I haven't done performance tests of this approach on Windows, but on other platforms (e.g., Android) where I needed efficient memory storage and rapid remapping of different ranges in 12-bit or 16-bit data, I've made good use of this technique.

I tell the framework that my 12/16-bit grayscale data is really RGB565, so that it is happy serializing, deserializing, and performing other manipulations. When I need to display it, I pass it through a ColorMatrix that maps the appropriate window to 8-bit grayscale in ARGB8888.
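
A rough sketch of the ColorMatrix half of this on Windows with System.Drawing might look like the code below; the GrayscaleDisplay class, the Render method, and the matrix coefficients are placeholders of mine, not the actual window mapping:

using System.Drawing;
using System.Drawing.Imaging;

static class GrayscaleDisplay
{
    // Placeholder weights: combine the spoofed channels into one gray value
    // replicated across R, G and B. Real weights depend on the chosen window.
    public static readonly ColorMatrix ExampleMatrix = new ColorMatrix( new float[][]
    {
        new float[] { 0.5f, 0.5f, 0.5f, 0f, 0f },
        new float[] { 0.3f, 0.3f, 0.3f, 0f, 0f },
        new float[] { 0.2f, 0.2f, 0.2f, 0f, 0f },
        new float[] { 0f,   0f,   0f,   1f, 0f },
        new float[] { 0f,   0f,   0f,   0f, 1f },
    } );

    // Draw 'source' into a new 32-bit ARGB bitmap, remapping the channels
    // through the supplied ColorMatrix on the way.
    public static Bitmap Render( Bitmap source, ColorMatrix matrix )
    {
        var result = new Bitmap( source.Width, source.Height, PixelFormat.Format32bppArgb );

        using ( var g = Graphics.FromImage( result ) )
        using ( var attributes = new ImageAttributes() )
        {
            attributes.SetColorMatrix( matrix );

            g.DrawImage( source,
                         new Rectangle( 0, 0, source.Width, source.Height ),
                         0, 0, source.Width, source.Height,
                         GraphicsUnit.Pixel,
                         attributes );
        }

        return result;
    }
}

Calling GrayscaleDisplay.Render( spoofedBitmap, GrayscaleDisplay.ExampleMatrix ) then yields a viewable ARGB copy.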

If anyone wants to try this, I'll post my mapping algorithm.

answered 2012-08-22T15:17:55.917

Two possible ways:

  • Use the Bitmap constructor that points at an arbitrary buffer. This requires you to keep the buffer alive until the Bitmap is disposed, but it prevents unnecessary copying of the bitmap data in memory (see the sketch below the list).
  • The LockBits method can be used to get a pointer to the Bitmap's data. In this case, construct a Bitmap as usual with the desired dimensions and format, then call LockBits and copy your data into the buffer it returns. This is slower, but necessary if your data is not in a format that the Bitmap constructor can accept directly and therefore requires some kind of custom conversion.
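
A minimal sketch of the first option (the WrapBuffer helper, the out GCHandle, and the assumption of already-converted 32-bit ARGB data with no row padding are mine, not part of this answer):

using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

static class BitmapWrapper
{
    // Sketch of option 1: wrap an existing 32-bit ARGB buffer without copying.
    // The caller must keep 'handle' allocated (and the array alive) until the
    // returned Bitmap is disposed, then call handle.Free().
    public static Bitmap WrapBuffer( uint[] argbPixels, int width, int height, out GCHandle handle )
    {
        handle = GCHandle.Alloc( argbPixels, GCHandleType.Pinned );

        int stride = width * 4; // 4 bytes per pixel, no row padding assumed

        return new Bitmap( width, height, stride,
                           PixelFormat.Format32bppArgb,
                           handle.AddrOfPinnedObject() );
    }
}

For the second option, the LockBits pattern in the answer above is essentially what this describes.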
answered 2011-07-26T16:19:15.767