
Cannot seem to create a High DPI window in MacOSX #140

Open

biglambda opened this issue Feb 27, 2017 · 30 comments

Comments

@biglambda

biglambda commented Feb 27, 2017

I'm trying to create a high DPI window on macOS, but I always seem to get the scaled-up, low-DPI output.
My window setup code looks like this:

setupSDL :: IO SDL.Window
setupSDL =
  do
    SDL.initializeAll
    version <- SDL.version
    putStrLn $ "SDL Version: " ++ show version
    let windowConfig = SDL.WindowConfig
                      { SDL.windowBorder       = True
                      , SDL.windowHighDPI      = True
                      , SDL.windowInputGrabbed = False
                      , SDL.windowMode         = SDL.Windowed -- SDL.FullscreenDesktop
                      , SDL.windowOpenGL       = Nothing
                      , SDL.windowPosition     = SDL.Absolute (L.P $ L.V2 10 10)
                      , SDL.windowResizable    = True
                      , SDL.windowInitialSize  = L.V2 800 600
                      }
    window <- SDL.createWindow (Text.pack "MyApp") windowConfig
    config <- SDL.getWindowConfig window
    putStrLn $ "Flags: " ++ show config
    return window

I'm launching the app from a bundle with the 'open' command in Terminal. My Info.plist includes:

<key>NSHighResolutionCapable</key> <true/>
@matthewleon
Contributor

@biglambda is this still an issue for you? For what it's worth, on macOS 10.12.5 I have high DPI working even without the plist key. I can check, but I believe that while the window size doesn't change when I set windowHighDPI, the pixel dimensions of the GL surface it contains do double.

@biglambda
Author

Hmmm... it is still an issue. What versions of the libraries etc. are you using?

@matthewleon
Contributor

SDL 2.0.5, the latest version on Homebrew, with the Haskell sdl2 library from this repo.

@biglambda
Author

Can I see your initialization code?

@matthewleon
Contributor

Just using initializeAll. Nothing special. Using the following configuration for the window:

windowConfig :: WindowConfig
windowConfig = WindowConfig
  { windowBorder       = True
  , windowHighDPI      = True
  , windowInputGrabbed = False
  , windowMode         = Windowed
  , windowOpenGL       = Nothing
  , windowPosition     = Wherever
  , windowResizable    = True
  , windowInitialSize  = V2 800 600
  }

If I print out the dimensions returned by rendererViewport for that window, they will be a full 1600x1200.
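For reference, a minimal sketch of that check (it assumes a renderer has been created for the window above; rendererViewport is a StateVar):

-- Sketch (assumes a renderer for the window above); with windowHighDPI
-- = True this reports the doubled pixel dimensions on a 2x display.
viewport <- get (rendererViewport renderer)
print viewport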

@matthewleon
Contributor

matthewleon commented Jul 25, 2017

Note that if I set windowHighDPI to False in the config above, those dimensions will be 800x600. From this, and from the impression that bilinear stretching of a texture looks nicer with windowHighDPI (if I get a moment I can take screenshots to compare), I conclude that high DPI support is working.

@chrisdone
Member

@biglambda can you show a screenshot of your window? Perhaps you were in high DPI the whole time but, as in #172, didn't know what to look for to confirm it's working.

@biglambda
Author

[screenshot of the window]

@biglambda
Author

biglambda commented Oct 20, 2017

This is my startup code:

startInterface :: Point2 IntSpace -> IO InterfaceState
startInterface screenSize =
  do  SDL.initializeAll -- [SDL.InitEvents, SDL.InitVideo]
      version <- SDL.version
      putStrLn $ "SDL Version: " ++ show version
      let windowConfig = SDL.WindowConfig
                        { SDL.windowBorder       = True
                        , SDL.windowHighDPI      = True
                        , SDL.windowInputGrabbed = False
                        , SDL.windowMode         = SDL.Windowed -- SDL.FullscreenDesktop
                        , SDL.windowOpenGL       = Just SDL.defaultOpenGL
                        , SDL.windowPosition     = SDL.Absolute (Point2 10 10)
                        , SDL.windowResizable    = True
                        , SDL.windowInitialSize  = V2 (fromIntegral . unISpace . unOrtho . pX $ screenSize)
                                                      (fromIntegral . unISpace . unOrtho . pY $ screenSize)
                        }
      window <- SDL.createWindow (Text.pack "Semblance") windowConfig
      config <- SDL.getWindowConfig window
      putStrLn $ "Flags: " ++ show config
      -------------------- Create Output Bitmap ------------
      surface <- SDL.getWindowSurface window
      bitmap <- makeBitmap screenSize surface
      return $ InterfaceState window bitmap

@chrisdone
Member

Right, so your window title bar is in high DPI so I think the whole window is.

Say your screenSize variable is 800x600 pixels. OS X will back that window with a framebuffer roughly twice that size, and your makeBitmap function should take that into account. Try doubling the size of the vector, e.g. makeBitmap (let V2 w h = screenSize in V2 (w*2) (h*2)) surface. If that looks better, then you can query the proper scalex and scaley as I did in #172.
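For instance, a sketch of querying the real scale factors rather than assuming 2x (assuming the window from the startup code above):

-- Sketch (assumes the window from the startup code above): compute the
-- DPI scale by comparing the drawable size to the logical window size.
V2 winW winH <- SDL.get (SDL.windowSize window)
V2 drawW drawH <- SDL.glGetDrawableSize window
let scaleX = fromIntegral drawW / fromIntegral winW :: Double
    scaleY = fromIntegral drawH / fromIntegral winH :: Double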

@biglambda
Author

biglambda commented Oct 27, 2017

I finally have some time to work on this.
So I'm not using Cairo; I have my own rasterizer. Essentially all I do is get the buffer from the SDL surface with this code:

makeBitmap :: Point2 IntSpace -> SDL.Surface -> IO Bitmap
makeBitmap size surface =
    do  let width  = fromIntegral . unISpace . unOrtho . pX $ size
            height = fromIntegral . unISpace . unOrtho . pY $ size
        --let numPixels = width * height
        --ptr <- mallocBytes (fromIntegral numPixels * sizeOf (undefined :: CUInt))
        ptr <- castPtr <$> SDL.surfacePixels surface
        return Bitmap { bitW   = width
                      , bitH   = height
                      , bitPtr = ptr
                      }

I write pixel values directly to that buffer and then I update the display using this code:

updateDisplay :: StateT InterfaceState IO ()
updateDisplay =
  do window <- use interfaceWindow
     liftIO $ SDL.updateWindowSurface window

And I still get a low DPI output. Most frustrating thing ever :)

@schell
Contributor

schell commented Oct 27, 2017

Sorry if I missed something or if this is a distraction, but I have what I think is high DPI working in one of my programs, which I set up with the following:

  let ogl = defaultOpenGL{ glProfile = Core Debug 3 3 }
      cfg = defaultWindow{ windowOpenGL      = Just ogl
                         , windowResizable   = True
                         , windowHighDPI     = True
                         , windowInitialSize = V2 640 480
                         }

Then in my main loop I can query the window with v2Cint <- get $ windowSize window, where
v2Cint shows V2 640 480. Similarly, if I use v2Cint <- glGetDrawableSize window I get the size of the entire current framebuffer, which is at 2x (V2 1280 960). Without the windowHighDPI = True entry in cfg, both show V2 640 480.

@nickkuk
Contributor

nickkuk commented Oct 27, 2017

@biglambda, you will get the lower resolution if you create the window with windowHighDPI = True and then call makeBitmap with the value you passed to windowInitialSize, or with the value from windowSize. You should call your makeBitmap function with the value from glGetDrawableSize instead.
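A minimal sketch of that change against the startInterface code above (pointFromSize is a hypothetical helper converting the returned CInts back to a Point2 IntSpace):

-- Sketch: size the bitmap from the drawable, not the logical window size.
-- pointFromSize is a hypothetical conversion to biglambda's Point2 IntSpace.
V2 drawW drawH <- SDL.glGetDrawableSize window
bitmap <- makeBitmap (pointFromSize drawW drawH) surface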

@chrisdone
Member

Right, @biglambda, you need to make your bitmap at least twice the size. Otherwise, whatever smaller bitmap you give it will get stretched to fill the canvas.

@biglambda
Author

@schell, how do you get access to the 1280x960 framebuffer itself, if you want to write to it directly?

@nickkuk
Contributor

nickkuk commented Oct 28, 2017

@biglambda, if you use SDL.Video.Renderer with SDL.Video.Renderer.Texture and your goal is to fill the whole window, you need to create the texture from an SDL.Video.Renderer.Surface whose size comes from SDL.Video.OpenGL.glGetDrawableSize, then call SDL.Video.Renderer.copy renderer texture Nothing Nothing.
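A sketch of that path, assuming a window and renderer already exist (createRGBSurface allocates a CPU-side surface to write pixels into):

-- Sketch (assumes window and renderer exist): allocate a surface at the
-- drawable size, upload it as a texture, and blit it over the whole window.
V2 w h <- SDL.glGetDrawableSize window
surface <- SDL.createRGBSurface (V2 w h) SDL.ARGB8888
-- ... write pixels into the surface here ...
texture <- SDL.createTextureFromSurface renderer surface
SDL.copy renderer texture Nothing Nothing
SDL.present renderer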

If you use the raw OpenGL API, you need to create and fill an OpenGL texture with

-- FFI import matching the C prototype of glTexImage2D (the 32-bit integer
-- types here are ABI-compatible stand-ins for GLsizei etc.):
foreign import ccall unsafe "glTexImage2D" glTexImage2D ::
  GLenum -> GLint -> GLenum -> GLuint -> GLuint -> GLint -> GLenum -> GLenum -> CString -> IO ()

...

  V2 w h <- glGetDrawableSize window
  glTexImage2D GL_TEXTURE_2D 0 format (fromIntegral w) (fromIntegral h) 0 format GL_UNSIGNED_BYTE ptr

where format is GL_RGBA or GL_RGB and ptr is a pointer to an array of w * h pixels in the specified format.
Then you need to draw it into the window framebuffer with a shader.

@biglambda
Author

biglambda commented Oct 31, 2017

Ok, thanks for that insight. I finally have it working. The secret so far has been to forget about using the surface from the window.
Here are the SDL-specific functions I'm currently using.

startInterface :: Point2 IntSpace -> IO InterfaceState
startInterface screenSize =
  do  SDL.initializeAll -- [SDL.InitEvents, SDL.InitVideo]
      version <- SDL.version
      putStrLn $ "SDL Version: " ++ show version

      let openGL = SDL.defaultOpenGL{ SDL.glProfile = SDL.Core SDL.Normal 3 3 }
      let windowConfig = SDL.WindowConfig
                        { SDL.windowBorder       = True
                        , SDL.windowHighDPI      = True
                        , SDL.windowInputGrabbed = False
                        , SDL.windowMode         = SDL.Windowed -- SDL.FullscreenDesktop
                        , SDL.windowOpenGL       = Just openGL
                        , SDL.windowPosition     = SDL.Absolute (Point2 10 10)
                        , SDL.windowResizable    = True
                        , SDL.windowInitialSize  = V2 (fromIntegral . unISpace . unOrtho . pX $ screenSize )
                                                      (fromIntegral . unISpace . unOrtho . pY $ screenSize )
                        }
      window <- SDL.createWindow (Text.pack "Window Title") windowConfig
      let rendererConfig = SDL.RendererConfig
                       { SDL.rendererType          = SDL.AcceleratedVSyncRenderer
                       , SDL.rendererTargetTexture = True
                       }
      renderer <- SDL.createRenderer window 0 rendererConfig
      return $ InterfaceState window renderer

updateDisplay :: (Bitmap -> IO ()) -> StateT InterfaceState IO ()
updateDisplay drawOn =
  do  window   <- use interfaceWindow
      renderer <- use interfaceRenderer
      liftIO $ do  (V2 width height) <- SDL.glGetDrawableSize window
                   --putStrLn $ "drawableSize: " ++ show (V2 width height)
                   -- note: this allocates a fresh texture every frame (see the follow-up comments below)
                   texture <- SDL.createTexture renderer SDL.ARGB8888 SDL.TextureAccessStreaming (V2 width height)
                   (ptr, _) <- SDL.lockTexture texture Nothing
                   bitmap <- makeBitmap (fromIntegral width) (fromIntegral height) ptr
                   drawOn bitmap
                   SDL.unlockTexture texture
                   SDL.copy renderer texture Nothing Nothing
                   SDL.present renderer

I'm afraid, though, that this adds an unnecessary copy operation compared to SDL.updateWindowSurface. What do you think?

@nickkuk
Contributor

nickkuk commented Oct 31, 2017

@biglambda, if you create the SDL.Renderer with SDL.rendererType = SDL.AcceleratedVSyncRenderer and have suitable drivers, you will get a hardware-accelerated renderer. You can check whether you actually have acceleration with SDL.getRendererInfo after creation; SDL.rendererInfoName will be one of

  • "direct3d" (it is DirectX9);
  • "direct3d11" (DirectX11);
  • "opengl";
  • "opengles";
  • "opengles2";
  • "PSP";
  • "software".

If your renderer is not software, SDL.copy (and the other functions that work with SDL.Renderer) is very fast; it is effectively a shader call. All SDL.Textures are just numbers, "names" for textures stored in video-card memory, and SDL.present renderer is just a flip of textures in video memory.
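A quick sketch of that check, assuming the renderer created earlier (RendererInfo has a Show instance):

-- Sketch: print the renderer info, including the driver name, to confirm
-- that you actually got a hardware-accelerated backend.
info <- SDL.getRendererInfo renderer
print info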

On the other hand, SDL.Surface lives in RAM, so we have three classes of slower functions:

  • CPU -> CPU (all functions for SDL.Surface -> SDL.Surface);
  • CPU -> GPU (SDL.updateWindowSurface, SDL.lockTexture and writing, SDL.createTextureFromSurface);
  • GPU -> CPU (SDL.getWindowSurface, SDL.lockTexture and reading).

I think you should always use SDL.Renderer and SDL.Texture when possible. Use SDL.Surface to load pictures from .jpg, .png, and .bmp files, then store them in separate SDL.Textures, or pack them into a single SDL.Texture as a texture atlas.

For example, to create and use an atlas you can (see the sketch after this list):

  1. create a big SDL.Texture with the SDL.TextureAccessTarget flag for the atlas;
  2. set it as the SDL.rendererRenderTarget (then all SDL.copy calls will copy to the atlas);
  3. load an SDL.Surface from .jpg, .png, .bmp using sdl2-image (for pictures) or from .ttf using sdl2-ttf (for letters);
  4. call SDL.createTextureFromSurface to create an SDL.Texture of the same size;
  5. call SDL.copy to copy the texture from 4) to the desired place on the atlas; you don't need SDL.present renderer here;
  6. repeat from 3) as needed;
  7. set SDL.rendererRenderTarget $= Nothing (then all SDL.copy calls will copy to the window again);
  8. in updateDisplay, draw the textures at any position and rotation with SDL.copyEx (you can draw one texture, e.g. a letter, many times), then call SDL.present renderer.
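A sketch of the render-target switching in steps 1, 2 and 7, assuming the renderer from earlier (the atlas size is arbitrary):

-- Sketch (assumes the renderer from earlier): create an atlas texture,
-- redirect SDL.copy into it, then restore the window as the target.
atlas <- SDL.createTexture renderer SDL.ARGB8888 SDL.TextureAccessTarget (V2 2048 2048)
SDL.rendererRenderTarget renderer $= Just atlas
-- ... SDL.copy the per-glyph/per-picture textures into the atlas here ...
SDL.rendererRenderTarget renderer $= Nothing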

You can also find some related information on your question here.

@biglambda
Author

biglambda commented Oct 31, 2017

Ok, interesting. The last step of my "drawOn" function is actually an OpenCL kernel that fills a buffer for every pixel in the window. So unfortunately it seems that I currently copy the buffer back to host memory and then back to video memory again. Is there a way to allocate a texture and then refer to it as an OpenCL buffer?

@nickkuk
Contributor

nickkuk commented Nov 1, 2017

@biglambda, you should try the following:

  1. create the SDL.Renderer with SDL.rendererInfoName = "opengles2" or "opengl"; to ensure this before creation you can set the following hint:
     SDL.setHintWithPriority SDL.OverridePriority SDL.HintRenderDriver SDL.OpenGLES2
  2. create an empty SDL.Texture with the size from SDL.glGetDrawableSize;
  3. find out the OpenGL "name" of the texture from the previous step:

type GLint = Int32
type GLenum = Word32
pattern GL_TEXTURE_BINDING_2D :: forall a. (Num a, Eq a) => a
pattern GL_TEXTURE_BINDING_2D = 0x8069
pattern GL_TEXTURE_2D :: forall a. (Num a, Eq a) => a
pattern GL_TEXTURE_2D = 0x0DE1
foreign import ccall unsafe "glGetIntegerv" glGetIntegerv :: GLenum -> Ptr GLint -> IO ()

  ...
  SDL.glBindTexture texture
  glName <- alloca (\p -> glGetIntegerv GL_TEXTURE_BINDING_2D p >> peek p)
  SDL.glUnbindTexture texture
  ...

  4. use clCreateFromGLTexture2D with texture_target=GL_TEXTURE_2D, miplevel=0, texture=fromIntegral glName to create an OpenCL image object;
  5. in your updateDisplay function, draw into this OpenCL image object, then do SDL.copy renderer texture Nothing Nothing >> SDL.present renderer; you don't need to allocate textures or any other memory during this per-frame drawing step.
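A sketch of the per-frame step 5, assuming the texture, renderer, and OpenCL image from the steps above (runKernelOn is a hypothetical wrapper around your OpenCL enqueue call):

-- Sketch of step 5: render into the shared image, then blit and present.
-- runKernelOn is a hypothetical wrapper around the OpenCL enqueue call;
-- nothing is allocated during the frame.
runKernelOn clImage
SDL.copy renderer texture Nothing Nothing
SDL.present renderer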

@biglambda
Author

Thanks a lot, I'm working on trying to implement this, I'll let you know how it goes.

@biglambda
Author

biglambda commented Nov 5, 2017

Ok, I think I'm close to having this working. Currently I'm getting an error from OpenCL when I try to run clCreateFromGLTexture2D.

glName: 1
"[CL_INVALID_GL_OBJECT] : OpenCL Error : Bad texture object"
"[CL_INVALID_GL_OBJECT] : OpenCL Error : Image creation from a GL object failed."

Not sure about the right approach to debug this.

@nickkuk
Contributor

nickkuk commented Nov 5, 2017

In this old discussion someone described the same behavior in the SDK examples, caused by the driver. Are you able to run a small existing OpenGL-OpenCL interop example?

@biglambda
Author

Ok, thanks for pointing me in the right direction.

I think page 10 of this document http://sa10.idav.ucdavis.edu/docs/sa10-dg-opencl-gl-interop.pdf gets into how to do OpenGL-OpenCL interop on a Mac.

It looks like a few of the functions needed, namely CGLGetCurrentContext and CGLGetShareGroup, don't have bindings in the OpenCL package I'm currently using.
It looks like @acowley has been down this road.

https://gist.github.com/acowley/cdac93e3b580b65bd7d2#file-clglinterop-hs

I'm going to see if I can get some of this working in my code.

@acowley

acowley commented Nov 5, 2017

Indeed I do OpenGL-OpenCL interop on macOS all the time. Let me know if you run into any trouble, but the code you linked should get you going.

@biglambda
Author

Hi, what I decided to do was modify @acowley's CLUtil package to include TextureObject parameters, and I included an example program that uses SDL to display his QuasiCrystal kernel via the CL-GL interop. You can find the forked repository here: https://github.com/biglambda/CLUtil
I wasn't completely sure, but it seems to run a lot faster than the buffer-copying version.

@nickkuk
Contributor

nickkuk commented Nov 10, 2017

@biglambda, in your example, why do you create the texture on every frame? You can create it once at the beginning.

@nickkuk
Contributor

nickkuk commented Nov 10, 2017

@biglambda, more precisely, you should create it once at the beginning and recreate it on window resize events.
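A sketch of that, assuming the window, renderer, and streaming texture from the earlier code (the surrounding event loop is elided):

-- Sketch (assumes window/renderer from earlier): recreate the streaming
-- texture only when the window size actually changes.
recreateOnResize event texture =
  case SDL.eventPayload event of
    SDL.WindowSizeChangedEvent _ -> do
      SDL.destroyTexture texture
      V2 w h <- SDL.glGetDrawableSize window
      SDL.createTexture renderer SDL.ARGB8888 SDL.TextureAccessStreaming (V2 w h)
    _ -> return texture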

@biglambda
Author

Cool, I just pushed a version that does that.

@biglambda
Author

Something I noticed is that this only runs on the CPU so far.
If you change line 170 in my example program from:
clState <- initFromGL CL_DEVICE_TYPE_ALL
to:
clState <- initFromGL CL_DEVICE_TYPE_GPU

For my first device, an Intel Iris Pro, I get:

[CL_INVALID_DEVICE] : OpenCL Error : clCreateCommandQueue failed: Unable to locate device 0x1024500 in context 0x7fc09516bb30.
TestCLGL: CL_INVALID_DEVICE

For my second device, an AMD Radeon R9 M370X Compute Engine (which my system report lists as driving the display), I get black output. If I switch back to the CPU it works fine.
