New GUI Enhancement Question

This forum is for discussing Reason. Questions, answers, ideas, and opinions... all apply.
User avatar
Raveshaper
Posts: 1089
Joined: 16 Jan 2015

05 Mar 2020

Timmy Crowne wrote:
05 Mar 2020
Would anyone who’s familiar with the guts of Reason know if offloading drawing to the GPU would also improve CPU performance? That would be a cool bonus for large projects.
GPU acceleration would allow the CPU to stick to calculations related to audio processing, resulting in better performance. Short answer: yes.

The problem with fixing the graphics issue is that the brand is anchored in decades of investment in rasterized graphics (bitmaps). Bitmaps were what we had at the time Reason was first made, but now they will need to be replaced with something mathematical (vectors) to accommodate all display types and resolutions.

Extensions actually contain 4K-ready assets right now. But the ability to toggle those assets on has never been implemented. Why? Because simply allowing for bigger bitmaps only kicks the can down the road; it doesn't solve the problem of fixed-resolution graphics. It's still inexcusable to charge a higher price for 4K-ready modules and then disallow use of the graphics, especially when inclusion of those graphics is required for certain price tiers in the shop.

The toughest problem about the render engine revamp is that the way a module displays will remain in the hands of its maker, meaning more work for everybody currently selling. It's no secret that a lot of devs have basically walked from the format, meaning those people probably wouldn't return just to make a scalable vector interface. You would end up with a really disjointed "cluttered" rack, rather than a cohesive modern one.

Not an easy issue to fix because the bedrock of the application is based on fixed resolution assets. I agree with Enoch that this isn't a "few weeks" thing, unless they've been quietly working on this for many years and are only hinting at it now.

EDIT: How could I forget. The elephant in the room is that you would have to re-code pretty much all the modules if a vector interface was put in.

This is because currently, a lot of corners can be cut by using data sets whose size is equal to the discrete number of steps in a knob's rotation, for instance.

Those steps are determined in part by the graphics (how many frames of animation there are for the knob). That's why M-Class EQ has those particular odd frequencies that it can dial in, but not others. Those frequencies happen to lie on that particular frame of the knob turn, or they are in a data set that contains discrete values at those knob positions.

Believe it or not, it is common for modules to use data sets of precomputed values whenever possible to save on expensive real-time calculations, basically exploiting the limitations and coarse granularity of the bitmaps to reduce work and squeeze performance where it matters. This means that some internal aspects of modules won't scale seamlessly if something like a vector display is used, because those things are somewhat hard-coded to be efficient, not calculated.

This may have been fixed despite the difficulty. There is a reason the stock devices are branded "classic" now. Even so, you can see how this situation is a rather tall house of cards that starts falling over if you update the graphics engine.
:reason: :ignition: :re: :refillpacker: Enhanced by DataBridge v5

Rackman
Posts: 110
Joined: 28 Dec 2019

06 Mar 2020

HiDPI is fast becoming the standard now, and resolutions will only creep higher in the near future. Today's 4K is tomorrow's 8K. They might be able to fudge something taking advantage of the hi-res assets already supplied by RE devs, but all that does is bring them up to the current standard; they are still not future-proof. The only way to do that would be to use vector graphics (as VCV Rack has done), but that boat sailed long ago. Most RE devs have rightfully grown tired of RS's antics, departed long ago, and won't be back to update their REs.

User avatar
buddard
RE Developer
Posts: 1247
Joined: 17 Jan 2015
Location: Stockholm
Contact:

06 Mar 2020

Sorry, but there are so many misconceptions here that I can't help answering...

I have absolutely 0 inside information about what Reason Studios are up to, but I have been involved in the development of 20+ released REs by now, so I'd like to believe that I have a reasonable grasp on how the RE SDK works.
Raveshaper wrote:
05 Mar 2020
The problem with fixing the graphics issue is that the brand is anchored in decades of investment in rasterized graphics (bitmaps). Bitmaps were what we had at the time Reason was first made, but now they will need to be replaced with something mathematical (vectors) to accommodate all display types and resolutions.
As per the information recently revealed by Mattias, they are adapting their graphics engine to make use of GPU acceleration.
What are GPUs good at? Drawing lots and lots of textured polygons, i.e. quickly drawing rasterized graphics such as panel backgrounds and widget filmstrips. Among other things, GPUs have on-board texture memory for this very purpose.
So I highly doubt that they're implementing vector graphics, nor do they need to. They set high requirements on graphics resolution in the RE specification from the beginning, and now they can hopefully reap the rewards from making that decision 8+ years ago.
Extensions actually contain 4K ready assets right now.
The resolution of the GUI assets is actually higher than that. For REs using the 2D graphics pipeline, the resolution is 5 times higher than that of the current Reason rack. The older 3D pipeline is a little different since the geometry could be rendered at way, way higher resolutions before you would start to see vertices in "round" objects, but I'd say the overall limit is about the same due to the size limit on textures. Knobs would stay sharp, but the panel textures would start to get blurry.

To give you an idea of the resolution of a 2D RE, here's a detail from a full preview render of Sequences:

Sequences_Preview.png

This render was done just using the tools that come with the RE SDK, and it only uses the assets that we've actually delivered to Reason Studios.
Note that this small section of the GUI has about the same width in pixels as the entire width of the current rack in Reason (719px vs 754px).
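To put the figures in this post side by side, here's a quick back-of-the-envelope check (a purely illustrative Python sketch; the 754 px rack width, 5x asset scale, and 719 px crop are the numbers quoted above):

```python
# Back-of-the-envelope check of the figures in this thread (illustrative only).
RACK_WIDTH_PX = 754   # width of the current "classic" Reason rack
ASSET_SCALE = 5       # 2D RE assets are delivered at 5x the rack resolution

full_res_width = RACK_WIDTH_PX * ASSET_SCALE
print(full_res_width)  # 3770

# A 719 px crop of that full-res render is only ~19% of the delivered asset
# width, yet it is almost as wide as the entire classic rack:
crop_px = 719
print(round(crop_px / full_res_width * 100))  # 19 (percent)
```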
But the ability to toggle those assets on has never been implemented. Why? Because simply allowing for bigger bitmaps only kicks the can, it doesn't solve the problem of fixed resolution graphics.
Why is "fixed resolution graphics" a problem? They could be rescaled on the fly, with or without using mip maps that could be rendered directly from the original assets that every developer included with their RE deliveries to Reason Studios.
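For anyone unfamiliar with the mipmap idea mentioned here, a minimal sketch: each mip level halves the dimensions of the one above it, so a renderer can pick the level closest to the on-screen size instead of rescaling the full bitmap every frame. (Dimensions only, no image data; the 3770x1520 panel size is a hypothetical example based on the 5x figure in this thread.)

```python
# Build the chain of mip-level dimensions for a texture (illustrative sketch).
def mip_levels(width, height):
    """Return [(w, h), ...] halving each level until reaching 1x1."""
    levels = [(width, height)]
    while width > 1 or height > 1:
        width, height = max(1, width // 2), max(1, height // 2)
        levels.append((width, height))
    return levels

# Hypothetical full-res RE panel asset:
chain = mip_levels(3770, 1520)
print(chain[:4])  # [(3770, 1520), (1885, 760), (942, 380), (471, 190)]
```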
It's still inexcusable to charge a higher price for 4K ready modules and then disallow use of the graphics, especially when inclusion of those graphics is required for certain price tiers in the shop.
Who's charging "a higher price for 4K ready modules"?
And what do you mean by "certain price tiers"?
High-resolution graphics are a requirement for every RE; it's completely independent of price tiers.
It's something that is inherent in the RE format itself.
The toughest problem about the render engine revamp is that the way a module displays will remain in the hands of its maker, meaning more work for everybody currently selling. It's no secret that a lot of devs have basically walked from the format, meaning those people probably wouldn't return just to make a scalable vector interface. You would end up with a really disjointed "cluttered" rack, rather than a cohesive modern one.
I doubt that Reason Studios have any plans whatsoever to use scalable vector graphics in the rack. Why would they do that?
What they're doing now is probably just overcoming the performance issues with displaying higher resolution graphics, i.e. making use of the GPU that basically every computer has nowadays.

So implementing high resolution rendering of the Reason rack will have 0 impact on RE developers, past, present or future. This was one of the key ideas behind the RE SDK.
This is because currently, a lot of corners can be cut by using data sets whose size is equal to the discrete number of steps in a knob's rotation, for instance.

Those steps are determined in part by the graphics (how many frames of animation there are for the knob). That's why M-Class EQ has those particular odd frequencies that it can dial in, but not others. Those frequencies happen to lie on that particular frame of the knob turn, or they are in a data set that contains discrete values at those knob positions.

Believe it or not, it is common for modules to use data sets of precomputed values whenever possible to save on expensive real-time calculations, basically exploiting the limitations and coarse granularity of the bitmaps to reduce work and squeeze performance where it matters. This means that some internal aspects of modules won't scale seamlessly if something like a vector display is used, because those things are somewhat hard-coded to be efficient, not calculated.
I'm sorry, but this is simply not true.

The frame count in widget filmstrips is completely independent from the resolution of the underlying motherboard properties.
The limitation on accuracy when moving knobs etc is entirely related to the mouse dragging distance.
There are several examples where you can see that a value is changed without the knob moving, especially if you set mouse sensitivity to the highest setting and hold Shift while dragging.

I could create a knob from 5 frames, and if I map it to a non-stepped number property I will get exactly the same resolution interacting with it as I would when using a knob with 63 frames, or 127 frames.
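buddard's point can be sketched in a few lines: the filmstrip frame shown is derived from the property value, not the other way around, so a coarse filmstrip doesn't limit the precision of the underlying property. (A hypothetical illustration; real REs map values to frames via the SDK, not code like this.)

```python
# Illustrative sketch: the displayed frame is a function of the value.
def frame_for_value(value, frame_count):
    """Map a normalized value in [0, 1] to a filmstrip frame index."""
    return round(value * (frame_count - 1))

value = 0.3782  # the underlying property keeps full precision...
print(frame_for_value(value, 5))    # 2   ...a 5-frame knob just snaps coarsely
print(frame_for_value(value, 127))  # 48  while a 127-frame knob snaps finely
```

Either way, the stored value is 0.3782; only the picture differs.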
This may have been fixed despite the difficulty. There is a reason the stock devices are branded "classic" now. Even so, you can see how this situation is a rather tall house of cards that starts falling over if you update the graphics engine.
As you might have guessed, I strongly disagree with this conclusion. ;)

User avatar
chimp_spanner
Posts: 2926
Joined: 06 Mar 2015

06 Mar 2020

The resolution on that little screengrab was so crisp O_________O

So assuming these graphical requirements have been part of the RE format for 8+ years, the only devices that need redoing are surely the legacy ones, right? Which in the scheme of things...isn't a lot of devices.

What I'd really love is a dynamic zoom. Like, modifier + mouse-wheel/scroll in and out at the location of the cursor. Some days my eyes just aren't up to the job and I'd love to be able to zoom right in on a particular device. I really hope this is part of the high res GUI plan!

User avatar
EnochLight
Moderator
Posts: 8412
Joined: 17 Jan 2015
Location: Imladris

06 Mar 2020

buddard wrote:
06 Mar 2020
Sorry, but there are so many misconceptions here that I can't help answering...

I just want to give you major props for having such a measured and well-thought-out rebuttal while remaining professional and respectful. This is one of the few forums where this sort of thing can be discussed without people going nuts. Thanks for being a part of this forum!!
Win 10 | Ableton Live 11 Suite |  Reason 12 | i7 3770k @ 3.5 Ghz | 16 GB RAM | RME Babyface Pro | Akai MPC Live 2 & Akai Force | Roland System 8, MX1, TB3 | Dreadbox Typhon | Korg Minilogue XD

User avatar
fieldframe
RE Developer
Posts: 1038
Joined: 19 Apr 2016

06 Mar 2020

Rackman wrote:
06 Mar 2020
HiDPI is fast becoming the standard now, and resolutions will only creep higher in the near future. Today's 4K is tomorrow's 8K. They might be able to fudge something taking advantage of the hi-res assets already supplied by RE devs, but all that does is bring them up to the current standard; they are still not future-proof. The only way to do that would be to use vector graphics (as VCV Rack has done), but that boat sailed long ago. Most RE devs have rightfully grown tired of RS's antics, departed long ago, and won't be back to update their REs.
While it's true that we can expect display technology to continue to advance in the future, there are a few problems with this argument. The "2x" pixel density on a Retina MacBook Pro is already high enough that most people can't make out individual pixels from a normal viewing distance. Sure, Apple will eventually go to 3x because they can, but most people won't notice (did anyone notice when iPhones went from 2x to 3x on the iPhone X?). Furthermore, even this doesn't matter, because as Buddard mentions, the RS Shop already requires 5x image assets.

What actually will require a lot of rework for hiDPI is not any REs, but all the pre-Reason 6 first-party devices, something I'm surprised that no one seems to be talking about. On top of the major effort to rewrite Reason's UI code with modern frameworks, everything from Thor and Kong to RV-7 and the Spiders will have to have its UI and all of its filmstrips redone, which will be no small feat.

One more thing for everyone using "4K" as shorthand for hiDPI: 4K is just a resolution, not a density. A 24" 4K monitor might be driven at 2x pixel density, while a 27" monitor might be driven at 1.5x. A 30" 4K monitor could conceivably be run at 1x, although everything would be pretty small. So Reason already has "4K support" as long as you run your 4K monitor at 1x! 😛
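The density arithmetic behind this point is simple: a display's native PPI is its diagonal pixel count divided by its diagonal size in inches; the OS scaling factor (1x, 1.5x, 2x) is then a separate choice. An illustrative sketch:

```python
# PPI = diagonal pixel count / diagonal size in inches (illustrative).
import math

def ppi(width_px, height_px, diagonal_in):
    return math.hypot(width_px, height_px) / diagonal_in

# The same 3840x2160 ("4K") panel at three different physical sizes:
for size in (24, 27, 30):
    print(size, round(ppi(3840, 2160, size)))  # 24" ~184, 27" ~163, 30" ~147
```

Same pixel count, three quite different densities, hence three different sensible scaling factors.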

User avatar
JiggeryPokery
RE Developer
Posts: 1176
Joined: 15 Jan 2015

06 Mar 2020

buddard wrote:
06 Mar 2020
Sorry, but there are so many misconceptions here that I can't help answering...

Extensions actually contain 4K ready assets right now.
The resolution of the GUI assets is actually higher than that. For REs using the 2D graphics pipeline, the resolution is 5 times higher than that of the current Reason rack. The older 3D pipeline is a little different since the geometry could be rendered at way, way higher resolutions before you would start to see vertices in "round" objects, but I'd say the overall limit is about the same due to the size limit on textures. Knobs would stay sharp, but the panel textures would start to get blurry.
You've corrected a number of misconceptions, but what you should point out is that quoting the resolution of the art assets is misleading. A 754px crop of a full-size image looks like a zoom, so it's going to look "crisp" and showy at non-4K resolutions, and big for those viewing at 4K, but it's unlikely the device will actually be that big in typical 4K usage. Resolution: it's all relative!

In theory, the SDK is set up to handle an 8K rack, because the SDK devices are produced for 3850 or 4096, and these mythical HiDPI devices are unlikely to be full width at 4K.

Now, the claim that the 3D pipeline could support high vertex-count rendering is somewhat untrue: one wouldn't normally apply huge LODs even for a circle, you'd only add enough for it to look like a circle at 4096px, since anything more was a) wasteful, and b) likely to get one's device rejected for excessive LODs ;). Regardless, what is true is that the render size is limited for the exact reason you note: the artwork doesn't go higher than is required for 4K (or 8K, assuming a maximum device width of ~50% of the screen at an 8K resolution, i.e. a device width of 3840px). Thinking about 4K specifically, given that's the size most of us are looking at right now: 4K is 3840px wide, so the maximum render for a 4K monitor will realistically have to be a fair bit smaller than that, or it'll be wider than the screen, given one needs a margin for things like scroll bars and window frames.

So while it's easy to understand people "wow"ing at the high-res pic you showed, it could be rather misleading without that explanation.

With a "4K rack", it's not that devices are zoomed in or especially "bigger". The rack should effectively look as sharp and as big at 4K as it does now at the old [W]SVGA/[W]SXGA-compatible format Reason's rack still uses, at their respective resolutions and dot pitches; e.g. you'd have stacks of many devices with two or three columns visible, right up to 8K. We'd return to something like the same relative size, give or take a few percent, at the increased resolution and/or smaller dot pitch (important even at 1080 resolutions on 13/15" laptops).

So what peeps are probably looking at is the rack device widths merely doubling to, say, 1650px per device (a little more than twice the size it is now), which would be an improvement that may be enough at 4K. Individual devices could potentially be zoomable in a pop-out format like VSTs instead, but those would still be limited to a max width of ~3800px. Or a full-rack zoom from the "classic" 754 to the 3840 "full device" width. Any thought of zooming more just to get a blurry image would be as dumb as a bag of spanners, as the provided artwork doesn't support it.

But remember you have the issue of texture memory for every device at the full rack size, and whether it can be dynamically loaded as each device appears on screen, or all has to be loaded upfront to allow fast scrolling and zooming through the rack without display glitches. People here talk about GPU acceleration and it's easy to fall into a trap of high expectations. In particular, for those of you who've made these devices on HEDTs, or who play a lot of games and upgrade to the latest gear regularly, it's natural to think they're talking about the latest 2080 Ti or something! Nope. I mean, I've only got a 1070 for starters! :lol: But seriously, it's potentially all got to render on that Intel-integrated PoS with 2GB of RAM-shared texture memory on a mid-range FHD 4GB laptop from 2014. It's got to be lowest-common-denominator stuff. There may have to be tempered tiers of performance expectations and a "sorry, it just ain't gonna work on your Walmart laptop" cutoff, but there's still got to be a minimum acceptable spec, and that's non-AMD/Nvidia laptop graphics.
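To make the texture-memory worry concrete, a rough illustrative estimate (uncompressed RGBA at 4 bytes per pixel and a hypothetical full-res panel size; a real engine would use texture compression and mip streaming, so treat these as upper-bound numbers):

```python
# Rough VRAM cost of uncompressed RGBA textures (illustrative only).
def texture_mib(width_px, height_px, bytes_per_px=4):
    """MiB for one uncompressed texture: width * height * bytes per pixel."""
    return width_px * height_px * bytes_per_px / 2**20

panel = texture_mib(3770, 1520)  # one hypothetical full-res panel background
print(round(panel, 1))           # ~21.9 MiB
print(round(panel * 50))         # ~1093 MiB for a 50-device rack, before
                                 # counting any filmstrips
```

On a 2GB integrated GPU that budget evaporates fast, which is exactly why the load-upfront vs. stream-on-scroll question matters.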

So the SDK GUI was, by design or accident, set up for an 8K resolution, provided the maximum render size is the equivalent of a two-rack-column device width at most. But hell, really it's just Samsung trying to push 8K (check out this video for why 8K is simply not needed for domestic purposes: a 4K OLED is way sharper than a Sammy 8K QLED in a sort-of blind test/demonstration; the reasons why are fascinating, so do watch it through).
The 3D pipeline was likely developed to allow the possibility of rasterizing at any size up to 4K. It's doubtful there was any serious intent beyond a whiteboard brainstorm for a vector-based rack; after all, while it looked really cool in RED at its native scale, it looks utterly shocking once you zoom out and the LOD reduces and the bitmap blurs, and you can't zoom in from the native scale as the textures then degrade and pixelate/blur.

The sensible approach is re-rasterizing both panels and filmstrips to smaller sizes pretty much on the fly from the original full-size art. This is likely what e.g. Arturia's V Collection does, and they at least do it pretty damn fast; however, it's feasible they handle controls a little differently to allow more efficient re-rasterizing, whereas with the RE SDK it's always a fixed filmstrip, and these can be massive. Plus a VST is a single device, not multiple devices all having to be re-rendered simultaneously. All devices would then ship with just the original full-size panel and filmstrip artwork. Note this would mean a choice of preset zooms, whether within the rack or as a VST-style pop-up window, not a free zoom with a mouse wheel like a Photoshop image. However, given PH's bare-minimum-standard-for-release implementation history, frankly, I don't see this happening now; it's too urgent an issue given the real demand for it and their need to constantly please, especially given they've already abandoned or back-burnered GPU acceleration at least once in the past two years. It wouldn't surprise me at this point if they simply pre-rendered everything once, to a fixed scale appropriate for your resolution, when you upgrade/install, from the original full-size art in the re-downloaded REs. But they could have set that up easily enough anytime in the past eight years, and with a professional designer recreating the old GUIs at, at most, a week per device, they've had at least 8 years to update those internally too. So either all those have long since been done or they've not done it at all, and were that the case it'd just speak volumes about some of the decision-making that goes on.

Potentially complicating the matter is that, in the rush to abandon the 3D format, there could be issues with how they apply the scaling divisor, leaving widgets slightly misaligned, as 4096 famously isn't too hot at dividing evenly by 5. (This also suggests a lack of forethought in choosing 4096 as the original width.)
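The divisibility gripe in two lines (illustrative arithmetic only):

```python
# A 1/5 scaling divisor only lands on whole pixels when the coordinate
# divides evenly by 5, and 4096 does not:
print(4096 / 5)       # 819.2  -> fractional, so a naive 1/5 downscale
print(4096 % 5 == 0)  # False     leaves sub-pixel offsets to deal with
```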

Presumably the 3D pipeline can't be rendered wider than the 2D one, because the 2D one must be as high as they now intend to go (which makes sense because, as established, any zoomed-in width probably shouldn't be as wide as the full 3840 resolution). But many early 2D, or 3D-to-2D converted, products may well have issues with control alignment, as it was possible to set positions that don't divide by 5. Maybe the rasterizer handles any of those early discrepancies and there's no issue at all. It's impossible to know unless they actually showed us WTF the plan was, so we could have checked our work and fixed it if the need arose. But they opted not to tell us, repeatedly, over 8 years. And as others have already noted, the DAW world's moved on.

stp2015
Posts: 324
Joined: 02 Feb 2016

07 Mar 2020

buddard wrote:
06 Mar 2020
To give you an idea of the resolution of a 2D RE, here's a detail from a full preview render of Sequences:

Sequences_Preview.png

This render was done just using the tools that come with the RE SDK, and it only uses the assets that we've actually delivered to Reason Studios.
Note that this small section of the GUI has about the same width in pixels as the entire width of the current rack in Reason (719px vs 754px).
OMG, I hope this is what it is going to look like. I will have a visually triggered orgasm every time I open Reason and load up Sequences! :P :P :P

User avatar
Reasonable man
Posts: 589
Joined: 14 Jul 2016

07 Mar 2020

All the posts in this thread lol

Image

User avatar
BRIGGS
Posts: 2137
Joined: 25 Sep 2015
Location: the reason rack

07 Mar 2020

Reasonable man wrote:
07 Mar 2020
All the posts in this thread lol

Image
lolz!!! :lol:

r11s

Rackman
Posts: 110
Joined: 28 Dec 2019

07 Mar 2020

Most people sit so far away that they can't see the extra definition of 4K screens, but that hasn't stopped them buying them in droves. And you also need to take into account zooming in at whatever resolution is the standard.

It's definitely great to hear that REs have higher-than-2x resolution bitmap graphics included, but this only really covers display at 1x on HiDPI screens, with enough res for a 2x zoom. Any more than that will be pixel city.
fieldframe wrote:
06 Mar 2020
While it's true that we can expect display technology to continue to advance in the future, there are a few problems with this argument. The "2x" pixel density on a Retina MacBook Pro is already high enough that most people can't make out individual pixels from a normal viewing distance. Sure, Apple will eventually go to 3x because they can, but most people won't notice (did anyone notice when iPhones went from 2x to 3x on the iPhone X?). Furthermore, even this doesn't matter, because as Buddard mentions, the RS Shop already requires 5x image assets.

User avatar
Oquasec
Posts: 2849
Joined: 05 Mar 2017

07 Mar 2020

Interesting.
Maybe they could experiment with the plugin format first and add the changes to the DAW later.
Like making the plugins vectorized.
Producer/Programmer.
Reason, FLS and Cubase NFR user.

User avatar
xboix
Posts: 281
Joined: 22 Oct 2019

07 Mar 2020

Most "scalable" VSTs are not truly scalable. They just have several set magnifications like 125%, 150% and 200%. I assume they just have a set of bitmaps for each of the set magnifications. Good enough and much easier than converting everything to vector.
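The "several bitmap sets" approach xboix describes boils down to picking the nearest pre-rendered scale at draw time. A minimal sketch, assuming a hypothetical set of 100/125/150/200% assets (the function name and scale list are my own, not from any SDK):

```python
def pick_asset_scale(requested: float, available=(1.0, 1.25, 1.5, 2.0)) -> float:
    """Choose which pre-rendered bitmap set to draw from for a requested
    UI scale. Rounding *up* to the next available set means the image is
    only ever downscaled, which avoids the blur of upscaling."""
    for scale in sorted(available):
        if scale >= requested:
            return scale
    return max(available)

pick_asset_scale(1.3)  # -> 1.5: downscale the 150% set rather than blur the 125% one
```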

User avatar
buddard
RE Developer
Posts: 1247
Joined: 17 Jan 2015
Location: Stockholm
Contact:

07 Mar 2020

JiggeryPokery wrote:
06 Mar 2020
You've corrected a number of misconceptions, but what you should point out is that the resolution of the art assets is misleading in that usage. A 754px crop of a full-size image presented as a zoom is going to look "crisp" and showy at non-4K resolutions, and big for those viewing at 4K, but it's unlikely the device will be displayed that big in 4K in typical usage. Resolution: it's all relative!
Well, that's why I said that the render was meant to show off the resolution of the GUI, not the size. :)

I don't think anyone believed that was an actual zoom level -- Why would you ever need to zoom in to a level beyond the device covering the full width of the screen? Especially when all REs are designed to be usable at smaller sizes.

The highest-DPI Apple Retina display (the 12" MacBook) has 226 ppi. That means a full-res RE, which is 3770 pixels wide, would be 16.7 inches wide.
On the new Apple Pro Display XDR (32"), at 218 ppi, a full-res RE would be 17.3 inches wide, still more than half the screen.
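Those physical sizes are just pixels divided by density; a quick sketch using the same numbers (the RE width and PPI values come from the post above, the function itself is mine):

```python
def physical_width_in(width_px: int, display_ppi: float) -> float:
    """Physical width of an image drawn pixel-for-pixel on a display."""
    return width_px / display_ppi

physical_width_in(3770, 226)  # ~16.7 in (12" MacBook Retina density)
physical_width_in(3770, 218)  # ~17.3 in (Pro Display XDR density)
```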
JiggeryPokery wrote:
Now, the claim that the 3D pipeline could support high-vertex rendering is somewhat untrue, given one wouldn't normally apply huge LODs even for a circle; you'd only add enough for it to look enough like a circle at 4096px, since anything more was a) wasteful, and b) likely to get one's device rejected for excessive LODs ;) . But regardless, what is true is that the render size is limited for the exact reason you note: the artwork doesn't go higher than is required for 4K (or 8K, assuming a maximum device width of ~50% of the screen at an 8K resolution, i.e. a device width of 3840px).
Yes, and this is easy to illustrate with another render, this time from an RE using 3D graphics (Euclid):

Euclid_Knob_Zoomed.png (74.99 KiB) Viewed 6230 times

The issues with geometry detail are barely starting to become apparent here (especially the knob's outer circle), but the panel texture is already a blur.
This is at a ridiculously high zoom level, needless to say.
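For the curious, the geometry-detail tradeoff on display here is easy to quantify with the standard chord-error bound for approximating a circle with line segments. This is a generic sketch, nothing RE-specific; the half-pixel error tolerance is an assumption:

```python
import math

def circle_segments(radius_px: float, max_error_px: float = 0.5) -> int:
    """Minimum segment count for a regular polygon whose chordal deviation
    from a true circle stays under max_error_px (half a pixel by default)."""
    if max_error_px >= radius_px:
        return 3
    theta = 2 * math.acos(1 - max_error_px / radius_px)  # arc angle per segment
    return max(3, math.ceil(2 * math.pi / theta))

circle_segments(50)   # a couple dozen segments at a typical knob size
circle_segments(400)  # the same knob zoomed 8x needs roughly 3x the segments
```

Which is why geometry authored for a 4096px cap starts showing facets once you zoom past what it was budgeted for.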

Maybe the rasterizer handles any of those early discrepancies and there's no issue at all. Impossible to know, unless they'd actually shown us WTF the plan was, so we could have checked our work and fixed it if the need arose. But they opted not to tell us, repeatedly, over 8 years. And as others have already noted, the DAW world's moved on.
I think the simple reason that they didn't tell us is that they didn't know.
Or do you think they sit at a long table in their secret underground lair (right next to the shark tank!) and carefully plan what they're not going to tell us this time? :lol:
It's basically impossible to know what display technology will look like in 8 years.
When they drafted the RE spec (2010 or 2011?), 1024x768 was a common display standard, so 4096 was a pretty reasonable maximum texture size to assume.

Anyway, in the end I don't think it will matter much, the resolution of the format is definitely enough to get us by, at least until we all get terapixel display adapters surgically implanted into our visual cortex. ;)

User avatar
EnochLight
Moderator
Posts: 8412
Joined: 17 Jan 2015
Location: Imladris

07 Mar 2020

buddard wrote:
07 Mar 2020
Or do you think they sit at a long table in their secret underground lair (right next to the shark tank!) and carefully plan what they're not going to tell us this time? :lol:
:lol: :lol: :lol:

I legitimately believe that, sadly, some devs actually do think this. :shock: :lol:

My hopes for the GUI/GPU stuff that is being worked on is that it will accomplish two things at least, but hopefully these three:
  • Shift all (or at least most) GUI operations to the GPU to free the CPU for more audio/DSP related stuff
  • Allow the rack to be re-sized natively, in Reason (not relying on your OS), so that the Rack devices will appear larger on 1080p-8K displays without appearing blurry.
  • Allow some way of making a single rack device that is "in focus" to be resized full-screen (or at least appear substantially larger) utilizing the above two methods.
Accomplish those three things, and I think the vast majority of users complaining about the GUI will be happy (well, aside from those who still hate skeuomorphism in their DAW/plugins). :lol:

Something tells me the first 2 things are very likely, and the 3rd thing may or may not ever see the light of day. :(
Win 10 | Ableton Live 11 Suite |  Reason 12 | i7 3770k @ 3.5 Ghz | 16 GB RAM | RME Babyface Pro | Akai MPC Live 2 & Akai Force | Roland System 8, MX1, TB3 | Dreadbox Typhon | Korg Minilogue XD

User avatar
JiggeryPokery
RE Developer
Posts: 1176
Joined: 15 Jan 2015

07 Mar 2020

buddard wrote:
07 Mar 2020


I think the simple reason that they didn't tell us is that they didn't know.
Or do you think they sit at a long table in their secret underground lair (right next to the shark tank!) and carefully plan what they're not going to tell us this time? :lol:
:lol:

Indeed, but likewise it's all about scale and relativity! It's like the plane at the end of Casablanca. That's not a large shark tank very far away; that's really a very small tank, and you're actually standing next to it. Those are just Dwarf Lanternsharks ;)

User avatar
fullforce
Posts: 849
Joined: 18 Aug 2018

07 Mar 2020

EnochLight wrote:
07 Mar 2020
  • Shift all (or at least most) GUI operations to the GPU to free the CPU for more audio/DSP related stuff
The impact will be negligible.
  • Allow the rack to be re-sized natively, in Reason (not relying on your OS), so that the Rack devices will appear larger on 1080p-8K displays without appearing blurry.
This would be great.

User avatar
pongasoft
RE Developer
Posts: 479
Joined: 21 Apr 2016
Location: Las Vegas
Contact:

07 Mar 2020

As far as I am concerned, besides looking better, I just hope that the upcoming GUI enhancements will actually solve the performance issues I am having with Reason 11 on my MacBook Pro.. I use a 4K monitor and I had to force the application to start in "low resolution" mode so that there would not be as many pops and clicks (and yes I spent hours playing with the buffer size, hyper-threading and whatever other settings... only thing that truly made a difference was lowering the resolution!). So of course it looks like shit, but at least I can use it (did not have that kind of issue with Reason 9 btw). And of course, before you start telling me that my machine is not powerful enough, Logic X works beautifully, hardly uses any CPU and renders everything in glorious 4K... So like I said, hopefully changing the way they do GUI will help with performance...

User avatar
EnochLight
Moderator
Posts: 8412
Joined: 17 Jan 2015
Location: Imladris

07 Mar 2020

fullforce wrote:
07 Mar 2020
EnochLight wrote:
07 Mar 2020
  • Shift all (or at least most) GUI operations to the GPU to free the CPU for more audio/DSP related stuff
The impact will be negligible.
There are more than a few MacBook Pro users who will disagree with you. ;)

That said, perhaps even some audio/DSP stuff could be offloaded to the GPU as an option? That would be pretty sweet, too. Hell, I'll just be happy to not have blurry devices when I zoom in. I'm trying not to get my hopes up.

User avatar
JiggeryPokery
RE Developer
Posts: 1176
Joined: 15 Jan 2015

07 Mar 2020

EnochLight wrote:
07 Mar 2020

I legitimately believe that, sadly, some devs actually do think this. :shock: :lol:

My hopes for the GUI/GPU stuff that is being worked on is that it will accomplish two things at least, but hopefully these three:
  • Shift all (or at least most) GUI operations to the GPU to free the CPU for more audio/DSP related stuff
  • Allow the rack to be re-sized natively, in Reason (not relying on your OS), so that the Rack devices will appear larger on 1080p-8K displays without appearing blurry.
  • Allow some way of making a single rack device that is "in focus" to be resized full-screen (or at least appear substantially larger) utilizing the above two methods.
It's a very funny joke. The visual image presented is farcical and amusing, but remember, for all the jocularity, at its heart is a simple truth: "PH don't talk publicly about what they're working on".

Yet they seem to keep mentioning HiDPI support. So, given that the policy is that they don't talk publicly about what they're working on, if someone's repeatedly mentioning it across the various forums of the internet, saying things like they're "acutely aware of the issue" and "working hard on it", then you need to ask why they're saying that; because logically, it's therefore not something being actively worked on, precisely because they don't talk about what they're working on! :shock: :lol:

Come on, you can't have it both ways.

So they're saying these things publicly because the lack of it is now an active problem for sales, and they're at least making the sop that it's happening in order to reassure users and keep the upgrade revenues coming, hoping R10 users will upgrade to R11 thinking that maybe HiDPI will arrive in the R11 cycle. My understanding is that's currently not the case. That might change, of course. As I've said before, I think Reason still looks quite fine at non-UHD resolutions on a large desktop monitor. But on an FHD laptop or 4K desktop, it's really a fucking shambles. And let's not compare it to other DAWs; it matters not what anyone else is currently doing or not doing, so saying "oh well, Ableton or whatever doesn't support it" is a fallacy. In any case, what is true is that more and more VST/AU devices support HiDPI regardless of whether this DAW or that DAW has made the switch in its underlying GUI. Rack Extensions, the plugin format, as of today does not support HiDPI. It's utterly irrelevant that the SDK and devs already provide the assets for it. The end user does not use the SDK.

But, wait, hang on, I've got to comment on this: their infamous comment about being "acutely aware", well, the only sensible response to that is to say "but you've been acutely aware of it for a decade!" It's no good our friend buddard there, sorry budd! :P , trying to rewrite history above saying "It's basically impossible to know what display technology will look like in 8 years", because they exactly DID know what the display technology would look like in 8 years time! They wrote about it, at some length, in 2012! :roll: :lol: :lol: :lol:

And given that technically the lead time is probably a bit longer, you'd have to assume most of these plans were in place by late 2010/early 2011 at the latest, so it's really nine to ten years, even. But sadly the focus seems to have been on the window dressing, rather than the window itself. You can put all sorts of shiny toys in the display, but you've gotta clean the window so people can see them!

I don't know about the complexity of such a process; I'd guess the original plans didn't work out and left them a bit high and dry, or the state of the graphics code base is more complex to untangle than expected, such that they couldn't just, I dunno, convert everything to OpenGL, etc. There's a ton of valid reasons, which is unfortunate of course, but the info I've had from sources is that it's proved harder than "expected". But I pointed out perhaps the simplest solution above: re-rasterize the GUI at a selected size on the initial install startup. It's clunky, and arguably a lazy approach, but it'd add HiDPI support very easily. Props however have doubled down with a strange bit of specificity: they're going for GPU acceleration. Sure, it can be related to adding HiDPI, but it's not the same thing: the two are separable. So are they not adding HiDPI because they've opted to tie it to adding GPU acceleration, but can't get the GPU acceleration to work? (Now I think about it, it's unclear why one needs full GPU acceleration for a GUI that runs at ~25fps and below. Indeed, existing devices are tied to that even if they improve the FPS, as pauses had to be hardcoded in order to get the data to display correctly. :/ )

As for your latter points, it would be risky to expect too much in the way of DSP performance gains in all areas, similar to how SMT turned out to be a bit of a dead horse in most cases (I'm not suggesting it's comparable to that, mind, just saying it'd be better to expect little, and perhaps the result might be better than expected). In the desktop space we're now well into the era of creatives moving to 8+ core processors, and it's been generally thought that Reason has reserved one core for the graphics ever since it went multi-core. If Reason hands most of the display operation to a GPU, one might suspect it would still need to reserve a core in order to communicate with it and handle live user interactions without interrupting live DSP processing, or maybe that's an area where SMT would have some more tangible benefit? The point being, I'm not sure it'll have that much real-world impact on DSP performance but of course would be happy to be pleasantly surprised :p . On that entirely theoretical basis, clear performance % gains may actually be higher on machines with fewer cores, e.g., laptops, even where the GPU is entirely Intel-bound.

Worthless though it is, my view is that your Hope 3 is far more likely than Hope 2. I just cannot see Reason Studios trying to figure out re-sizing the entire rack on the fly at the flick of a button, given they can't even run a colour replace without a full program reboot. The pop-up full-width or near-full-width window type of zoom seems far, far more likely, and I'll double down on the default rack size being chosen on install or after a restart, as colour schemes are now. If GPU acceleration is essential, then an outside possibility is they might be looking at something like easily divisible zoom depths that would result in a sharp image, e.g. 25/50/75/100%, as you might see in Photoshop. Still, that's potentially a lot of work for a busy rack, given that Photoshop isn't as snappy as it used to be handling just one image, back when it wasn't GPU accelerated!
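A sketch of what fixed zoom depths could look like in code, purely illustrative and assuming the hypothetical 25/50/75/100% steps mentioned above (the names are mine, nothing here is from Reason):

```python
ZOOM_STEPS = (0.25, 0.50, 0.75, 1.00)  # hypothetical fixed depths

def snap_zoom(requested: float) -> float:
    """Snap a requested zoom to the nearest fixed step, so assets are
    always rendered at a known, pre-rasterized scale and stay sharp."""
    return min(ZOOM_STEPS, key=lambda step: abs(step - requested))

snap_zoom(0.6)  # -> 0.5
snap_zoom(0.9)  # -> 1.0
```

The appeal of this design is that every step can be pre-rasterized once, Photoshop-style, instead of rescaling bitmaps continuously.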

User avatar
fieldframe
RE Developer
Posts: 1038
Joined: 19 Apr 2016

07 Mar 2020

JiggeryPokery wrote:
07 Mar 2020
(Now I think about it, it's unclear why one needs full GPU acceleration for a GUI that runs at ~25fps and below. Indeed, existing devices are tied to that even if they improve the FPS, as pauses had to be hardcoded in order to get the data to display correctly. :/ )
I think you answered your own question here: the MacOS 9 and Windows 98-era software renderer that has been drawing Reason for twenty years can’t top 30fps even on a modern computer. Now imagine trying to throw four times as many pixels through that creaky pipeline! Rewriting the renderer is a prerequisite to hiDPI.
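The "four times as many pixels" point is just fill-rate arithmetic. A back-of-envelope sketch (window size and frame rate are illustrative assumptions, not measured Reason numbers):

```python
def fill_rate(width: int, height: int, fps: float, scale: float = 1.0) -> float:
    """Pixels per second a renderer must produce for a given window size,
    frame rate, and UI scale. Doubling the scale quadruples the load."""
    return width * height * (scale ** 2) * fps

fill_rate(1920, 1080, 30)       # ~62 million px/s at 1x
fill_rate(1920, 1080, 30, 2.0)  # exactly 4x that at 2x
```

So a software renderer already at its ceiling at 1× has no headroom at all for HiDPI.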
JiggeryPokery wrote:
07 Mar 2020
Worthless though it is, my view is that your Hope 3 is far more likely than Hope 2. I just cannot see Reason Studios trying to figure out re-sizing the entire rack on the fly at the flick of a button, given they can't even run a colour replace without a full program reboot. The pop-up full-width or near-full-width window type of zoom seems far, far more likely, and I'll double down on the default rack size being chosen on install or after a restart, as colour schemes are now. If GPU acceleration is essential, then an outside possibility is they might be looking at something like easily divisible zoom depths that would result in a sharp image, e.g. 25/50/75/100%, as you might see in Photoshop. Still, that's potentially a lot of work for a busy rack, given that Photoshop isn't as snappy as it used to be handling just one image, back when it wasn't GPU accelerated!
The “restart to change color scheme” is pretty clearly caused by the limitations of the prehistoric rendering engine. Once they’re rendering all the backgrounds and filmstrips as textured triangles, it becomes like a game engine; zooming should be smooth and interactive, maybe using trackpad pinch-to-zoom. I wouldn’t be surprised if they adopt modern practices on texture management, aggressively unloading offscreen bitmaps and LOD-ing them in as you scroll, possibly resulting in some “blur-into-focus” loads if you’re jumping from one end of the rack to another at a high zoom.
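The texture-management idea fieldframe describes (load lower-detail versions while scrolling, sharpen when you stop) is essentially mipmap level selection. A generic sketch, not anything from Reason; the level cap of 4 is an arbitrary assumption:

```python
import math

def mip_level(native_px: int, onscreen_px: int, max_level: int = 4) -> int:
    """Pick the mip level whose resolution best matches the on-screen size.
    Level 0 is the full-resolution bitmap; each level halves it."""
    if onscreen_px >= native_px:
        return 0
    level = int(math.log2(native_px / onscreen_px))
    return min(level, max_level)

mip_level(2048, 2048)  # -> 0: fully zoomed in, full-res texture needed
mip_level(2048, 300)   # -> 2: zoomed out, quarter-res is plenty
```

The "blur-into-focus" effect would be exactly what you see while a lower level is on screen and level 0 is still streaming in.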

User avatar
JiggeryPokery
RE Developer
Posts: 1176
Joined: 15 Jan 2015

07 Mar 2020

fieldframe wrote:
07 Mar 2020
JiggeryPokery wrote:
07 Mar 2020
(Now I think about it, it's unclear why one needs full GPU acceleration for a GUI that runs at ~25fps and below. Indeed, existing devices are tied to that even if they improve the FPS, as pauses had to be hardcoded in order to get the data to display correctly. :/ )
I think you answered your own question here: the MacOS 9 and Windows 98-era software renderer that has been drawing Reason for twenty years can’t top 30fps even on a modern computer. Now imagine trying to throw four times as many pixels through that creaky pipeline! Rewriting the renderer is a prerequisite to hiDPI.
JiggeryPokery wrote:
07 Mar 2020
Worthless though it is, my view is that your Hope 3 is far more likely than Hope 2. I just cannot see Reason Studios trying to figure out re-sizing the entire rack on the fly at the flick of a button, given they can't even run a colour replace without a full program reboot. The pop-up full-width or near-full-width window type of zoom seems far, far more likely, and I'll double down on the default rack size being chosen on install or after a restart, as colour schemes are now. If GPU acceleration is essential, then an outside possibility is they might be looking at something like easily divisible zoom depths that would result in a sharp image, e.g. 25/50/75/100%, as you might see in Photoshop. Still, that's potentially a lot of work for a busy rack, given that Photoshop isn't as snappy as it used to be handling just one image, back when it wasn't GPU accelerated!
The “restart to change color scheme” is pretty clearly caused by the limitations of the prehistoric rendering engine. Once they’re rendering all the backgrounds and filmstrips as textured triangles, it becomes like a game engine; zooming should be smooth and interactive, maybe using trackpad pinch-to-zoom. I wouldn’t be surprised if they adopt modern practices on texture management, aggressively unloading offscreen bitmaps and LOD-ing them in as you scroll, possibly resulting in some “blur-into-focus” loads if you’re jumping from one end of the rack to another at a high zoom.
Ah, I getcha! That makes a ton of sense, actually. I can happily shut up again for another month :)

User avatar
EnochLight
Moderator
Posts: 8412
Joined: 17 Jan 2015
Location: Imladris

08 Mar 2020

fieldframe wrote:
07 Mar 2020
Once they’re rendering all the backgrounds and filmstrips as textured triangles, it becomes like a game engine; zooming should be smooth and interactive, maybe using trackpad pinch-to-zoom. I wouldn’t be surprised if they adopt modern practices on texture management, aggressively unloading offscreen bitmaps and LOD-ing them in as you scroll, possibly resulting in some “blur-into-focus” loads if you’re jumping from one end of the rack to another at a high zoom.
This idea is so perfect in execution, I almost had a tear fall from my eye; this is exactly how I would hope it would be handled. A creeping feeling tells me that we won't be so lucky, but still, that would be sweet.
