NOVA - Optimizing perf, noise and temperature

maxff9

Bronze Level Poster
I further reduced the voltage at which the GPU reaches its maximum frequency: 887 mV => 1905 MHz (start of the flat part of the line).
Here are the results:
GPU max voltage: 0.894 V
GPU max temp: 73°C (still the same but again... short run and fans are indeed very efficient !)
GPU max freq: 1905 MHz
FPS average: 63.7 (+- 8.5)
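(For reference, here is a minimal sketch of how an FPS average with a +- spread like the one above can be computed from a frametime log; the file name and column layout are assumptions, so adjust them to whatever your logging tool actually exports.)

```python
# Sketch: FPS average +/- standard deviation from a frametime log.
# Assumes a CSV with one frame time in milliseconds per row (hypothetical format).
import csv
import statistics

frametimes_ms = []
with open("frametimes.csv", newline="") as f:  # hypothetical file name
    for row in csv.reader(f):
        frametimes_ms.append(float(row[0]))

fps = [1000.0 / ft for ft in frametimes_ms]  # ms per frame -> frames per second
print(f"FPS average: {statistics.mean(fps):.1f} (+- {statistics.stdev(fps):.1f})")
```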

So you are saying that I should have a much lower line? Like starting the flat part at 887 mV => 1800 MHz?
Or should I keep looking for the lowest possible voltage for 1905 MHz?

Thanks for being here.

Edit: this time I reduced the maximum frequency. I don't see any change in performance (according to those FPS numbers) since I started changing the freq(volt) curve...
GPU max voltage: 0.906 V
GPU max temp: 73°C
GPU max freq: 1755 MHz
FPS average: 62.0 (+-7.6)

Another try: overclocking at low voltage, but with the maximum frequency reduced to 1710 MHz. Curve:
[Attached image: freq/voltage curve screenshot (freqlessvolt2.JPG)]
GPU max voltage: 0.844 V
GPU max temp: 70°C (gained 3°C, yeah!)
GPU max freq: 1710 MHz
FPS average: 62.0 (+-8.0)
Let's see if this stays stable during long play sessions.

Finally, using a curve similar to the one you proposed in another topic (1620 MHz @ 844 mV), I got these Firestrike results (I posted them in the other topic as well).
Chassis and Spec | Firestrike 3DMark | Graphics | Physics | Avg. GPU Temp | URL
Nova 15" / R5 3600 / RTX 2070 @ 1620 MHz @ 844 mV | 17092 | 18909 | 18648 | 63°C | Link
Why is my Graphics score so low compared to yours (~20000)? I saved 3°C but don't reach your nice results. (Of course the temperature depends on the room temperature; you can tell me if you ran it outdoors in December, haha.)

Question: if I have similar performance at the same voltage for 1700 or 1900 MHz, which frequency should I choose? Or does it not matter?
 

FerrariVie

Super Star
Question: if I have similar performance at the same voltage for 1700 or 1900 MHz, which frequency should I choose? Or does it not matter?
Yeah, I would say the one that gives you a great ratio between temps and performance... which looks to be 1710 MHz?
 

maxff9

Bronze Level Poster
Couldn't play with the laptop as much as I wanted.
I have several questions, if you are still not too annoyed!
  • Do we have a 2070 Mobile or a Mobile Refresh? How can we be sure?
  • During my tests, I didn't monitor the consumption (in watts). Is that important? Could the CPU or GPU be limited by it?
  • What does the "Boost clock" (~1400 MHz) of the RTX 2070 mean? Should I lock the max frequency below that for very old games, for instance?
  • My 3DMark Firestrike result is lower than before, but I kept the ~60 FPS average in Borderlands 3 regardless of my freq(voltage) configuration. Is 3DMark more accurate?
 

FerrariVie

Super Star
Couldn't play with the laptop as much as I wanted.
I have several questions, if you are still not too annoyed!
Hey, no problem at all!

  • Do we have a 2070 Mobile or a Mobile Refresh? How can we be sure?
From what I know, the Nova chassis uses the refreshed (2020) version of both the 2060 and 2070. You can tell by looking at the memory and default GPU clocks (base and boost) in GPU-Z (a free tool). The regular 2070 mobile should look like this and get around 19400 points in Firestrike graphics:
[Attached image: GPU-Z screenshot of a regular RTX 2070 Mobile (2070b.jpg)]


While the refreshed one should look something like this and get around 20400 in Firestrike graphics:
[Attached image: GPU-Z screenshot of the refreshed RTX 2070 Mobile]
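(If you'd rather check from the command line than with GPU-Z, something like this can also print the GPU name and maximum clocks that separate the two variants. A sketch only: it assumes nvidia-smi is on the PATH, and the exact fields reported can vary with driver version.)

```python
# Sketch: read the GPU name and max core/memory clocks via nvidia-smi,
# to help tell a regular 2070 Mobile from the 2020 refresh.
import subprocess

out = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=name,clocks.max.sm,clocks.max.mem",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
print(out.stdout.strip())  # e.g. "GeForce RTX 2070, <core MHz>, <mem MHz>"
```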


  • During my tests, I didn't monitor the consumption (in watts). Is that important? Could the CPU or GPU be limited by it?
The GPU will never go above 115 W, while the CPU can boost temporarily to 88 W but usually sits around 65 W. In summary, you don't need to monitor that if you don't want to, as the BIOS will make sure the power stays under those limits. But it's interesting to know how much you're drawing while gaming :D or how much you're cutting off when adjusting clock limits.
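(If you do get curious about the power draw, a rough once-a-second logger can be built on top of nvidia-smi; a sketch, assuming nvidia-smi is available, and note it only sees the GPU, not the CPU.)

```python
# Sketch: log GPU power draw, core clock and temperature every second,
# so you can see when the 115 W power limit is being reached.
import subprocess
import time

while True:
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=power.draw,clocks.sm,temperature.gpu",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    print(time.strftime("%H:%M:%S"), out.stdout.strip())
    time.sleep(1)
```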

  • What does the "Boost clock" (~1400 MHz) of the RTX 2070 mean? Should I lock the max frequency below that for very old games, for instance?
It will usually do that automatically for you: if the game is not using 100% of the GPU, the clocks will go down by default. However, nothing prevents you from doing it yourself, if you want to make sure you don't get brief clock spikes.

Give it a try on an old game without limiting clocks and check on the Afterburner overlay what clocks your GPU is boosting to.
  • My 3DMark Firestrike result is lower than before, but I kept the ~60 FPS average in Borderlands 3 regardless of my freq(voltage) configuration. Is 3DMark more accurate?
My opinion is that Firestrike is indeed more accurate, and if you're getting the same 60 FPS in Borderlands even after limiting clocks, it might mean one of two things:
  • For this particular game it's not going to make much of a difference in FPS, but it is reducing temps, which is excellent;
  • Or the built-in benchmark is really bad and doesn't measure performance correctly. :D
However, keep in mind that even if it doesn't drop FPS in real gaming on Borderlands, it should drop for heavier games.
 

maxff9

Bronze Level Poster
OK, we have the refresh for sure; it's even written in our Fire Strike online results. I was thinking the refresh would be worse because the memory clock is lower, but that's not the case. Well, good for us!

Follow-up questions:
  • If we ask for less voltage, the consumption (watts) should be reduced. Ultimately, is it the voltage input or the consumption that impacts the temperature?
  • Is the temperature of the CPU impacting the GPU? Because this thing is always at 80°C.
  • How can we use the advertised Boost Clock value to improve our settings?
  • Things I should do: run some tests with/without the W10 Game Mode.
  • Check the Nvidia control panel.

I will only use 3DMark Firestrike results to compare my settings. I don't think Borderlands 3 was the most appropriate tool.
I played around and learned a few things, which is already good. Time to dig a bit more, keeping in mind that I want similar performance with less noise. The end goal is the noise, not the temperature. But they're linked, and I don't want to melt my brand new toy. So let's find the sweet spot first, and then I'll look at the fans.

In the Firestrike database, people with an RTX 2070 Mobile Refresh, a max memory clock of 1375 MHz and a max frequency of 1920 MHz can reach a GPU score of 21458.
List of results from 3DMark: Link
In the end, what matters for the score is the average clock frequency: the higher, the better. But does that automatically translate into good in-game performance? And what is the impact of a high average frequency on the temperature?
I have the feeling that the best performance is obtained by (see the sketch after this list):
  • Making the curve flat after the max frequency.
  • Increasing the frequency for all voltage points before the max.
  • But does that mean that if the game doesn't require a high frequency, the GPU won't be able to drop its clock speed? We don't care, right? The only thing that matters is the voltage/temperature. If the GPU delivers more clock speed than required, we don't care. Please correct me if I am wrong.
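(To make the "flat after the max freq" idea concrete, here is a tiny sketch of what the limit does to the curve numerically. The voltage/frequency points are made-up illustrative values, not real 2070 readings.)

```python
# Sketch: "flattening the curve after the max freq" as a clamp operation.
# Keys are voltage points in mV, values are clocks in MHz (illustrative only).
stock_curve = {750: 1400, 800: 1550, 844: 1620, 900: 1750, 950: 1850, 1000: 1920}

def limited_curve(curve, f_max):
    """Clamp every point to f_max: the line goes flat from there on."""
    return {mv: min(mhz, f_max) for mv, mhz in curve.items()}

print(limited_curve(stock_curve, 1620))
# Every point at or above 844 mV now sits at 1620 MHz, so under load the GPU
# has no reason to request more than ~844 mV.
```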


I don't know the processes behind GPU performance very well, nor how running at high frequency / low voltage affects the physics there. I wouldn't want to kill my card by demanding too high a frequency at too low a voltage. Is that possible?


Results to be updated:
GPU Graphics score | Temp (°C) | Max clock (MHz) | Avg. clock (MHz) | Curve | URL
20436 (stock) | 69 | 1920 | 1687 | Default | Link
18909 | 63 | 1620 | 1610 | Limit: 1620 @ 844 mV | Link
 

FerrariVie

Super Star
  • If we ask for less voltage, the consumption (watts) should be reduced. Ultimately, is it the voltage input or the consumption that impacts the temperature?
Not sure about the answer here, but my understanding is that they're directly linked: when your voltage input is higher, it allows your card to clock higher without graphical glitches or crashes (which could happen with a pure undervolt, without reducing clocks at the same time).
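(For what it's worth, a common first-order model is that dynamic switching power scales as P ∝ V² · f, with static leakage on top that also falls with voltage. Plugging in two operating points from earlier in the thread gives a rough feel for the saving; a back-of-the-envelope sketch, not a precise model.)

```python
# Sketch: first-order dynamic power scaling, P ~ V^2 * f.
# Operating points taken from the tests earlier in the thread.
def relative_power(v1, f1, v2, f2):
    """Power of point 2 relative to point 1, under P ~ V^2 * f."""
    return (v2 / v1) ** 2 * (f2 / f1)

ratio = relative_power(0.894, 1905, 0.844, 1710)
print(f"1710 MHz @ 0.844 V draws ~{ratio:.0%} of the power of 1905 MHz @ 0.894 V")
# Roughly 80%, which lines up with the few degrees gained in the tests above.
```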
  • Is the temperature of the CPU impacting the GPU? Because this thing is always at 80°C.
Absolutely. On desktops the impact is minimal, because the chips are physically distant and each has its own heatsink with a fan (or more) directly on top, so the only thing that can impact both at the same time is the case's internal temperature.

On laptops it's totally different, as the fans are not directly on top of the chips; they're cooled by heatpipes, which are also mostly shared between CPU and GPU. This is both good and bad: it makes sure the most used chip is being cooled by both fans all the time, but on the other hand, the hotter chip will also make the cooler one run hot.

At some point you'll also need to address that issue on the CPU side, either by repasting with a better thermal paste (the ideal solution) or by manually undervolting the CPU. I would recommend going down this path after the GPU tweaking and before starting to mess with custom fan curves.

  • How can we use the advertised Boost Clock value to improve our settings?
I don't use it for anything, as in my opinion the clock at which power-limit throttling starts is more important.

I will only use 3DMark Firestrike results to compare my settings. I don't think Borderlands 3 was the most appropriate tool.
I played around and learned a few things, which is already good. Time to dig a bit more, keeping in mind that I want similar performance with less noise. The end goal is the noise, not the temperature. But they're linked, and I don't want to melt my brand new toy. So let's find the sweet spot first, and then I'll look at the fans.
Exactly. After you're happy with the GPU curve and with the CPU temperatures, you can start playing around with the custom fan curve in the Control Center, just to check whether that is good enough for you.

In the Firestrike database, people with an RTX 2070 Mobile Refresh, a max memory clock of 1375 MHz and a max frequency of 1920 MHz can reach a GPU score of 21458.
List of results from 3DMark: Link
In the end, what matters for the score is the average clock frequency: the higher, the better. But does that automatically translate into good in-game performance? And what is the impact of a high average frequency on the temperature?
I have the feeling that the best performance is obtained by:
  • Making the curve flat after the max frequency.
  • Increasing the frequency for all voltage points before the max.
  • But does that mean that if the game doesn't require a high frequency, the GPU won't be able to drop its clock speed? We don't care, right? The only thing that matters is the voltage/temperature. If the GPU delivers more clock speed than required, we don't care. Please correct me if I am wrong.
I don't know the processes behind GPU performance very well, nor how running at high frequency / low voltage affects the physics there. I wouldn't want to kill my card by demanding too high a frequency at too low a voltage. Is that possible?
You got it right (mostly). They are indeed limiting the max clock to around 1900 MHz to keep temps under control and avoid excessive fluctuations, as well as using lower voltages with higher clocks (overclocking). So, of course, overclocking will give you more performance (not just in benchmarks, but in games in general), but no one knows what impact it will have on the longevity of your hardware, and it will void your warranty. Even if the chip runs at lower temperatures, overclocking might be stressing other related components.

For now, I would not recommend GPU overclocking, as the 2070 is still quite good by today's standards and you can run anything at ultra settings. Three years from now it might start showing its age, and that's when overclocking will start to make sense, since its lifespan will no longer be much of an issue. I used to overclock my 4-year-old and far out-of-warranty Dell laptop, because it was already very slow and I wanted to replace it shortly anyway, so it gave me a few extra months of good usage. It is now 5 years old and still working with no issues, but the Intel i5 6300HQ + GTX 960M are quite bad (around 4k in Firestrike with the maximum overclock it could handle at stock voltages).
 

maxff9

Bronze Level Poster
Thanks again for the clarification :)
After tweaking the GPU freq(volt) curve, I'll play longer sessions and monitor the temperature etc. After that, I will open the chassis to look at the CPU thermal paste. On a scale from 0 to 10, how difficult is it to change the paste? Have you done it? If yes, can you recommend a good product?

I think you scared me enough, haha. I will only limit the max freq / max voltage and not run any higher clocks at low voltage for now. If some day I can't run certain games, I will reconsider. The next tests are to find which max frequency is the sweet spot I like. It's just a bit annoying that all those perf/temp improvements will probably not change anything about the noise xD
 

FerrariVie

Super Star
Thanks again for the clarification :)
After tweaking the GPU freq(volt) curve, I'll play longer sessions and monitor the temperature etc. After that, I will open the chassis to look at the CPU thermal paste. On a scale from 0 to 10, how difficult is it to change the paste? Have you done it? If yes, can you recommend a good product?
I haven't done it yet, simply because my CPU is quite cool: it doesn't go above 70°C at stock with a manual undervolt, and that together with a 1650/1700 MHz curve limit works well enough for me. But I'm considering repasting when we get close to summer, as my home gets quite hot during that period.

For me, only two thermal paste options are good enough: Noctua NT-H2 or Thermal Grizzly Kryonaut, though I tend to prefer the Noctua one (just because of the tube size and brand preference). Noctua has a perfectly sized 3.5 g tube, which should let you repaste CPU and GPU at least twice (or more), while I'm not sure the 1 g Kryonaut tube is enough for CPU + GPU together. They both have bigger tubes (10 g), but that is a waste of money if you only have one or two devices around.

Regarding how hard it is... well, I haven't done it yet, but looking at the board I can say the Nova should be one of the easiest chassis to repaste: there are no plastic parts locking the heatsink down, just the screws and one fan, and it all comes out as one single piece (all that copper is one part). Some other brands make it so hard that you need to disassemble the whole chassis to remove the heatsink, which is not the case here.

[Attached image: the Nova's one-piece heatsink assembly]


The only piece of advice (which I read here on the forum from other members who have already done it on their Novas) is to keep the laptop turned on (idle) for a few minutes before turning it off to remove the heatsink, as this softens the current paste and makes it easier to remove the whole piece. Just be careful not to burn yourself: use the black plastic stickers to lift it up, and do not use brute force :D

@demon28 did it on his, look at the pictures below:

I think you scared me enough, haha. I will only limit the max freq / max voltage and not run any higher clocks at low voltage for now. If some day I can't run certain games, I will reconsider. The next tests are to find which max frequency is the sweet spot I like. It's just a bit annoying that all those perf/temp improvements will probably not change anything about the noise xD
It depends on how much FPS you're willing to lose to reduce noise at any given time. But one thing at a time: we still have the CPU and fans to mess around with.
 

demon28

Bronze Level Poster
Repasting the Nova is a fairly straightforward process. After you remove the screws you will need to slide (not pull) the back cover off.
Ryzen CPUs do not lock into the CPU socket like Intel CPUs do. It is a known issue with AMD CPUs, and it is possible to pull the CPU out of the socket when pulling the heatsink off, which, although unlikely, might cause damage. To avoid that, I would advise you to run a CPU stress test for 5-10 minutes just to heat up the thermal paste and make it less sticky. Remove the heatsink gently; you can use the black stickers to lift it. Start with a little force and gradually increase it until you feel the heatsink moving.

As @FerrariVie said, the entire heatsink comes off in one piece and one of the fans is attached to it, so not only will you have to remove the screws for the left-hand fan, you will also need to unplug it and plug it back in once you are done repasting. The screws are numbered (1-8, I can't remember exactly how many), so make sure to unscrew them starting from the highest number down to the lowest. After repasting, tighten the screws following the order of the numbers, 1 -> 8.
I have tried the dot method, the X method and manually spreading the thermal paste. I got the best results using the X method on the CPU along with a small dot in the middle of each V (courtesy of @SuicidalChick3n), and a long line for the GPU. As for thermal paste, I used Kryonaut for the GPU and a thicker paste (whose name escapes me right now) for my CPU. I am not too sure whether a more viscous paste actually makes a noticeable difference, if any. Kryonaut is good but wears out quickly under high temperatures, hence why I only used it for my GPU. I think I have covered everything; if you have any questions, let me know! :)
 

maxff9

Bronze Level Poster
I really think I'll have to go down this route: whatever I do, the temperature of the GPU seems to stay the same, and the CPU goes up to 90 degrees in my tests. I'll order the Noctua NT-H2, which is cheaper here. The CPU stays at 72°C while barely under load, yet the GPU cools down easily from 70°C to 42°C in seconds.

Here are some results for FireStrike.

[Attached images: Firestrike result tables for the different curve settings]


I can't reach stock performance with the LINE settings (putting in a flat line to prevent the voltage from increasing).
But if I force a constant clock speed (1710 MHz), I get better results. What is the main downside of forcing a constant clock speed? I could have three profiles with three constant frequencies: one for browsing at the lowest frequency, one medium for old games, and one at 1710 MHz for big games. What's the catch? I guess there is a reason why GPUs have a freq(volt) curve, right...

The temperatures should not be taken as totally relevant as long as I haven't repasted. Thanks for all the advice, by the way.
 

FerrariVie

Super Star
I can't reach stock performance with the LINE settings (putting in a flat line to prevent the voltage from increasing).
But if I force a constant clock speed (1710 MHz), I get better results. What is the main downside of forcing a constant clock speed? I could have three profiles with three constant frequencies: one for browsing at the lowest frequency, one medium for old games, and one at 1710 MHz for big games. What's the catch? I guess there is a reason why GPUs have a freq(volt) curve, right...

The temperatures should not be taken as totally relevant as long as I haven't repasted. Thanks for all the advice, by the way.
The temps should indeed not be taken as totally relevant, as they're likely to drop with better paste; however, the variance between each config should still be there after repasting, just with smaller numbers. Which already answers your question about fixed clocks... the reason there is a curve is to drop clocks (and temperature) when the GPU is not being fully used, like on loading screens, menus or less demanding areas (like small rooms). Your GPU will never be at 100% usage all the time, no matter how heavy the game is. So a fixed clock will basically make your GPU switch between ~1000 MHz and 1710 MHz, which will cause your temperatures to go up, very likely higher than with the stock curve.
 

maxff9

Bronze Level Poster
OK, so a fixed clock speed is never a good idea. Let's discard that idea.
I think I finally understand better why the temperature isn't dropping much even though I decrease the max voltage. It is simply because I modify the maximum freq/voltage, but what matters is the freq/voltage over time, i.e. the average value. My max frequencies differ between rows of my table, but the average frequency is almost always the same (because the benchmark is quite demanding, right). This is consistent with your answer about fixed frequencies: I forced a high frequency when the game (i.e. 3DMark) wasn't even requiring it.
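(That "what matters is the average" point is easy to check from a hardware-monitor log; a sketch assuming a CSV export with a GPU clock column, where the file name and column header are placeholders for whatever your tool actually writes.)

```python
# Sketch: max vs time-averaged GPU clock from a monitoring log
# (e.g. an Afterburner/HWiNFO export; names below are hypothetical).
import csv
import statistics

clocks = []
with open("gpu_log.csv", newline="") as f:  # hypothetical file name
    for row in csv.DictReader(f):
        clocks.append(float(row["GPU Clock [MHz]"]))  # hypothetical column name

print(f"max: {max(clocks):.0f} MHz, average: {statistics.mean(clocks):.0f} MHz")
```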

But I still don't understand the huge change in performance from stock to 1920 max. The average frequency is almost the same (-7 MHz), but I lost 5-10 FPS and saved 0-1°C. How could preventing the GPU from going above its max frequency limit its performance? It can't physically go beyond that anyway. I don't get it. Or did something else change that I didn't monitor?

It seems that keeping the stock curve is the wisest thing to do for now. Hahaha, all those tests just to end up back at the default settings! But I learned a lot, and that will be valuable in the future.
 

FerrariVie

Super Star
OK, so a fixed clock speed is never a good idea. Let's discard that idea.
I think I finally understand better why the temperature isn't dropping much even though I decrease the max voltage. It is simply because I modify the maximum freq/voltage, but what matters is the freq/voltage over time, i.e. the average value. My max frequencies differ between rows of my table, but the average frequency is almost always the same (because the benchmark is quite demanding, right). This is consistent with your answer about fixed frequencies: I forced a high frequency when the game (i.e. 3DMark) wasn't even requiring it.

But I still don't understand the huge change in performance from stock to 1920 max. The average frequency is almost the same (-7 MHz), but I lost 5-10 FPS and saved 0-1°C. How could preventing the GPU from going above its max frequency limit its performance? It can't physically go beyond that anyway. I don't get it. Or did something else change that I didn't monitor?

It seems that keeping the stock curve is the wisest thing to do for now. Hahaha, all those tests just to end up back at the default settings! But I learned a lot, and that will be valuable in the future.
You mean comparing the first row to the third in the table below?
[Attached image: the Firestrike results table quoted above]


If that's what you're talking about, I also don't understand it. They should be pretty close, if anything with the second being better. The average clock is higher (1704 vs 1687), so the final score should have been higher as well. Maybe something in the background was causing the PC to slow down a bit?

Anyway, I also expected your temps to go down a bit more after limiting the clocks. That might mean one thing: the issue might lie with the CPU, which could be pushing the GPU temps up independently of whatever you do to bring them down.

What I'm going to do to help us understand it is run the same thing on my laptop, either today or tomorrow. I'll use the same max clocks that you used and check how much of a difference it makes to my temps. I did my earlier tests while gaming directly, not using a benchmark. I'll record the results in a table, just like you did, and post them back here.
 

maxff9

Bronze Level Poster
Yes, comparing the first 3 rows.
I am really starting to suspect background processes. I should set up "experimental conditions" and write everything down to be certain.
I asked my friend who has the exact same laptop as me (all stock settings), and he has very similar CPU temps. Either we both have normal thermals or we're both faulty, haha.
Anyway, I am going to order the thermal paste and look into that. I have to admit I've lost a bit of faith in reducing the fan speed.
What I can also do later, as I have a decent external monitor, is buy a cooling pad / air extractor, keep the laptop a bit further away from me and use a separate keyboard.

But first, optimization (because I like it).

Thanks for running those tests; it's around 4-5 minutes if you skip the Physics and Combined tests in 3DMark. We'll become experts on this machine!
 

maxff9

Bronze Level Poster
Yesterday I played Payday 2 with the settings at maximum and an FPS limit of 144 (my screen's max).
I noticed that the FPS dropped to ~100 in intense fights, but the GPU load was still low (let's say 60% max). Why wasn't the GPU pushing further? Waiting for the CPU (which wasn't at 100% either)? I don't remember if VSync was on, but it shouldn't have an impact as long as my FPS < 144, right?

The temp didn't go higher than 68°C, if I remember correctly. Playing with the headset was fine; I didn't hear the fans too much.
 

FerrariVie

Super Star
Yesterday I played Payday 2 with the settings at maximum and an FPS limit of 144 (my screen's max).
I noticed that the FPS dropped to ~100 in intense fights, but the GPU load was still low (let's say 60% max). Why wasn't the GPU pushing further? Waiting for the CPU (which wasn't at 100% either)? I don't remember if VSync was on, but it shouldn't have an impact as long as my FPS < 144, right?

The temp didn't go higher than 68°C, if I remember correctly. Playing with the headset was fine; I didn't hear the fans too much.
Hard to know for sure, but one possibility is that the game does not fully utilise all 6 cores / 12 threads of your CPU, but instead uses something like 4C/8T (or fewer). So you'll have close to 100% usage on those cores, but since the others are idle (or used by other things like Windows), your average CPU usage will sit between 60 and 80%.
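(You can verify that theory while playing: with the third-party psutil package, per-core load is easy to watch, and a game that only scales to a few cores stands out immediately. A minimal sketch.)

```python
# Sketch: print per-logical-core CPU usage once per second; a game limited to
# 4C/8T shows a few cores pinned high while the others stay mostly idle.
import psutil  # third-party: pip install psutil

while True:
    per_core = psutil.cpu_percent(interval=1, percpu=True)
    print(" ".join(f"{p:5.1f}" for p in per_core))
```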

Another possibility is that the game itself has bugs or lacks optimisation in its code, but I'm just guessing at this point. 100 FPS is still more than good enough, though :LOL:
 

FerrariVie

Super Star
I'm trying to run it, but something always gets in the way :D

Yesterday, midway through it, I found a Windows background process that was hurting my results (making the score go down, even after I flattened the curve at higher clocks).

Today, after I ran all the tests and was compiling the results, I noticed that for some weird reason the curve was not sticking correctly: I was still getting spikes at 19XX MHz but lower averages on some of the runs, which was likely a bug in Afterburner. After restarting the PC it started sticking again, but I will need to run all the tests within a short period of time to be as fair as possible with the temperatures between runs. I'll try again this weekend.
 

maxff9

Bronze Level Poster
Thanks for your dedication!
I find it quite tedious to hunt for Windows background processes. I took a look at some software to do it automatically, but everything seems shady.
I just kill apps such as Dropbox and other front-end apps, but I don't know what else could be done.
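(One low-tech option, instead of shady cleaner tools: snapshot which processes are actually burning CPU right before a run, so you know what to close, or what may have skewed a result. A sketch using the third-party psutil package.)

```python
# Sketch: list the ten busiest processes over a one-second window.
import time
import psutil  # third-party: pip install psutil

procs = list(psutil.process_iter(["name"]))
for p in procs:
    try:
        p.cpu_percent(None)  # prime the per-process counters
    except psutil.Error:
        pass

time.sleep(1.0)  # sampling window

usage = []
for p in procs:
    try:
        usage.append((p.cpu_percent(None), p.info["name"] or "?"))
    except psutil.Error:
        pass

for pct, name in sorted(usage, reverse=True)[:10]:
    print(f"{pct:5.1f}%  {name}")
```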

I hope Afterburner doesn't have too many bugs, because that would make this whole study very complicated. Or maybe something is overriding it? It's already complicated enough with variables that can't be logged.
I'll order the thermal paste this weekend and keep this thread updated.
 

SpyderTracks

We love you Ukraine
Thanks for your dedication!
I find it quite tedious to hunt for Windows background processes. I took a look at some software to do it automatically, but everything seems shady.
I just kill apps such as Dropbox and other front-end apps, but I don't know what else could be done.

I hope Afterburner doesn't have too many bugs, because that would make this whole study very complicated. Or maybe something is overriding it? It's already complicated enough with variables that can't be logged.
I'll order the thermal paste this weekend and keep this thread updated.
This is a little dated by now, as I haven't used either for a few years, but I always used to prefer EVGA Precision X over Afterburner; just one of those personal preference things, I guess.

It may be worth checking it out, though, to get comparative results; it may be more or less accurate (although it's now called Precision X1, that's how out of date I am ;)):

 