What is an Accelerated Processing Unit?

An Accelerated Processing Unit (APU) is a 64-bit microprocessor that combines the processing potential of a CPU (Central Processing Unit) and a GPU (Graphics Processing Unit) in a single chip. Although "APU" sounds like a generic term for any such chip, it is used exclusively by AMD as the brand name for the CPU/GPU combo chips it manufactures. To better understand what an APU is, it helps to have a little background on the two processors it combines.

As the brain of the computer, the CPU is the main processing unit that receives and executes instructions from computer software and applications. It also sends instructions to other parts of the system, telling them what to do. It is the most crucial part of a computer system; without it, the computer is basically dead.

The GPU performs a similar role to the CPU, but it processes only graphics-related data and renders graphical content. If a computer without a CPU is dead, a computer without a GPU is blind, with no video output.

In most systems, the CPU and GPU are two separate entities. This arrangement works well enough, except that data transfer rates improve when the two processors sit closer together. Furthermore, running two separate units results in higher power consumption, and AMD didn't turn a blind eye to this. In 2011, the company introduced its first high-performing, power-efficient processor that combined the advantages of the CPU and GPU in a single chip, popularly known today as the APU.

Evolution of APU

As a leading manufacturer of computer electronics, AMD has long been developing structured and efficient architectures for its CPUs and GPUs. The APUs it creates are usually a merger of its existing CPU and GPU designs, and the resulting processor performs better than the average CPU and GPU combined. Before it was known as the APU, the product line was first branded "Fusion"; due to a trademark infringement issue, the name was later changed to APU.

AMD designs two types of APU: one for high-performance devices and another for low-power devices. The first-generation APU for high-performance devices featured K10 CPU cores and a Radeon HD 6000-series GPU and was codenamed Llano. Likewise, the first APU for low-power devices featured the Bobcat microarchitecture and a Radeon HD 6000-series GPU and was codenamed Brazos. In 2012, AMD released Trinity, the second generation of the high-performance APU, and Brazos 2.0, the second generation of the low-power APU. The APU continued to progress as AMD's CPU and GPU architectures advanced, with performance at the core of each enhancement. Succeeding generations featured the latest architecture of the time, and each iteration bagged numerous improvements over the previous one. Aside from performance, AMD also improved upgradability: while earlier releases prevented future CPU upgrades, this became possible starting with the Ryzen APU series. The 2020 release, Renoir, is based on the Zen 2 core architecture and Vega 8 graphics.

The APU continues to evolve to this day, and with newer, more advanced architectures from AMD, the release of the next generation of APUs is imminent.

Benefits over CPU + GPU

The APU’s game-changing technology is a significant development in the computing industry, and it has several advantages over the CPU + GPU setup.

Better performance. Blending the CPU and GPU in the same chip improves data transfer rates significantly, since the two now use the same bus and share the same resources. APUs also support OpenCL (Open Computing Language), a standard interface for parallel computing that taps the compute power of GPUs. With CPU and GPU cores on one chip, tasks that need both the high processing power of a CPU and the fast image processing of a GPU can take advantage of the performance an APU offers.
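To see what an APU exposes to OpenCL, you can enumerate its platforms and devices. The sketch below uses the pyopencl package as one option (an assumption; any OpenCL query tool works) and simply prints each device's name, compute units, and memory.

```python
# Minimal OpenCL device enumeration; assumes pyopencl and an OpenCL
# driver (e.g., AMD's) are installed.
import pyopencl as cl

for platform in cl.get_platforms():
    print("Platform:", platform.name)
    for device in platform.get_devices():
        print("  Device:", device.name)
        print("  Compute units:", device.max_compute_units)
        print("  Global memory (MB):", device.global_mem_size // (1024 * 1024))
```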

Power-efficient. Combining two chips into one not only saves space but also saves power. Aside from improving the APU's performance, AMD also works consistently on reducing the chip's power consumption, despite it already being low. Recent releases feature a low Thermal Design Power (TDP); for example, the Ryzen Embedded R1102G has a TDP of only 6 W.

Cost-effective. Price is probably the biggest advantage of AMD's APU over a CPU and GPU tandem. With price tags ranging from roughly $100 to $400 depending on the features, purchasing an APU is generally cheaper than buying a CPU and a GPU separately. Though the higher-end units are quite pricey, they're still considerably cheaper than a CPU and GPU combination with the same level of performance. This also holds true for future upgrades: since AMD has become more flexible about the upgradability and compatibility of APUs, users can save a lot with a single-chip upgrade compared to upgrading both processors.

Is it a Better Processor?

APUs have been used across different devices such as desktops, laptops, servers, mobile devices, and game consoles. This heterogeneous chip has been embraced by businesses and consumers for a decade. But can it really replace a discrete CPU and GPU? Ultimately, it depends on the user's needs and demands.

Consumers, PC builders, and gamers on a budget can turn the benefits of the APU to their advantage. Most APUs provide decent performance; in fact, the fastest models can outperform mid-range CPUs and entry-level discrete GPUs. An APU is a perfect choice for users who don't demand intensive graphics or the highest possible CPU performance, and it will also do great for standard home and office PCs. AMD continues to develop more advanced APUs, and recent releases are already capable of supporting graphics-heavy tasks.

However, when it comes to extreme gaming, an APU won't suffice; it still can't compete with the graphical experience that high-end discrete graphics cards offer. For low-budget, entry-level PC building and gaming, though, an APU is an ideal alternative.

The APU cannot completely take the place of a discrete CPU and GPU, but it is a fitting high-performance, power-efficient alternative in many cases. As AMD's designs continue to advance and new technologies emerge, it would come as no surprise if future generations of the APU fully replace both.

Best NVidia GPUs

Thankfully, the COVID-19 vaccination is rolling out across the USA. It's also been a few months since we saw all the major GPU releases. This means one thing: prices are about to come down a whole lot. So, this may be the perfect time to invest in the best NVidia GPU. AMD is also on top of its game, but in this article we are focusing only on team green's greatest hits so far.

From speedy high-performance (and equally high-priced) silicon to medium-performance (and budget-friendly) GPUs, these are the best you can get in 2021. PS: GPU prices swing up and down constantly, so the value for money differs depending on the time of year and how tight stock is. In any case, let's first take a look at our buyer's guide, followed by reviews of the best NVidia has to offer today.

Best NVidia GPUs – A Buyer's Guide

Of course, clock speed, memory, memory speed, CUDA cores, ray tracing, and TFLOPS count are all essential aspects of assessing a GPU's performance. But in this section, we will talk about some other factors that need equal consideration. These are:

Form Factor
Take a look at the dimensions of the card. Does your motherboard have ample room to fit it? GPUs come in all shapes and sizes: half-height, single-slot, dual-slot, and even triple-slot types. A big card can block an adjacent slot, so if you have a mini motherboard, look for a mini card.

TDP rating
Thermal Design Power, or TDP, indicates how much heat a card is designed to dissipate and, by extension, how much power you need to run it at stock settings. Generally speaking, if you're going for any of the high-end NVidia GPUs reviewed below, you will need at least an 850 W PSU. For regular gaming cards, on the other hand, even 600 W is enough.
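As a rough sanity check, you can add up the TDPs of the major components and apply a headroom factor. The figures below are placeholder examples rather than measurements of any specific build, so substitute your own parts' numbers.

```python
import math

# Placeholder TDP figures; substitute the numbers for your own parts.
components_watts = {
    "GPU (RTX 3090-class)": 350,
    "CPU": 125,
    "Motherboard, RAM, drives, fans": 75,
}

total = sum(components_watts.values())
headroom = 1.5  # common rule of thumb: size the PSU ~50% above peak draw
suggested = math.ceil(total * headroom / 50) * 50  # round up to the next 50 W

print(f"Estimated peak draw: {total} W")       # 550 W
print(f"Suggested PSU rating: {suggested} W")  # 850 W
```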

Power Connectors
Serious gaming cards draw more watts than the standard 75 W an x16 PCIe slot can provide, so you have to attach additional power connectors, which come in 6-pin and 8-pin variations. Some cards have only one of these, while others carry multiple 6- or 8-pin connectors on the same card.

Ports
Modern monitors have HDMI/DisplayPort, while older models use a DVI port. Therefore, you should make sure that the card you’re buying has the port your monitor supports. Otherwise, you will need to invest in an adapter (or a new display).

Cooling
The high-end GPUs reviewed below dissipate a lot of heat. The cooling mechanism on a card differs depending on the manufacturer; MSI, EVGA, and ASUS are all great GPU makers, but the cooling performance of their cards may vary. That's why we recommend investing in a proper liquid-cooling setup to protect your hefty investment.

1. RTX 3090

NVidia's latest is a powerful chip. Colossal in size, just as in performance, the RTX 3090 isn't even intended for the average gamer. It targets creative professionals, 8K gamers, and compute-intensive application acceleration. That's one reason why it doesn't come with an average price tag, either.

Based on NVidia's latest Ampere architecture, the RTX 3090 boasts 10,496 CUDA cores, 24 GB of GDDR6X memory, a base clock speed of 1,395 MHz, a memory speed of 19.5 GT/s, and around 35.6 TFLOPS of FP32 performance. During our tests, this card boosted to 1,787 MHz on average while peaking at around 70 degrees C. Impressive, right?

As for performance, it is faster than anything we have seen before. NVidia claims that this card can deliver 8K gaming at 60 fps even with ray tracing enabled, thanks to DLSS. We tested this claim with Red Dead Redemption 2, and, not surprisingly, the results are off the charts: 8K gaming is a walk in the park for this beast.

Overall, we wouldn't recommend it for gaming alone. This one is a creator's card: it has an astonishingly powerful GPU and a massive frame buffer that enable content creation on a level never seen before. Priced around $2,100, the RTX 3090 is the one to get if you're looking for the best card on the market right now.

Buy Here: Amazon

2. RTX 3080

Coming in second is NVidia's most popular GPU right now. It's capable of what can be termed second-generation ray tracing: the initial ray-tracing-enabled cards required a significant frame-rate sacrifice, and NVidia's latest 3000 series takes good care of that.

To manage this feat, NVidia has added many more CUDA cores to this 8 nm chip and has also updated the GPU's Tensor Cores and RT Cores. So you get extra DLSS goodness along with much better ray-tracing performance.

In terms of specs, the RTX 3080 has 8,704 CUDA cores, 10 GB of GDDR6X memory, a boost clock speed of 1,710 MHz, a memory speed of 19 GT/s, and 29.76 TFLOPS of performance. That's a wee bit short of our champion, the RTX 3090, but still more than enough for a prolific 4K gamer.

That said, the 3080 is for professional gamers who want smooth frame rates without spending extra on an RTX 3090. It has almost the same features and is overall a better pick if you're after raw gaming performance. However, if you want 8K gaming at 60 fps, the 3090 obliterates any competition offering that experience.

Buy Here: Amazon

3. Titan RTX

Even though the Titan RTX lost the battle to the RTX 3080 and RTX 3090 on our list of the best NVidia GPUs, it's still an 8K monster worthy of the top 3. In essence, the Titan RTX is intended for content creators, researchers, and digital professionals who are looking for top-of-the-line rendering performance.

But the Titan RTX is priced higher than the RTX 3090, which begs the question: is it outdated now? Before answering, let's first examine some of its specs and compare its performance to the 3090. Only then can you have a better understanding of the two.

Based on NVidia's Turing architecture, the Titan RTX has 4,608 CUDA cores, 24 GB of GDDR6 memory, a boost clock speed of 1,770 MHz, a memory speed of 14 GT/s, and 130 TFLOPS of Tensor (deep learning) performance. Comparatively, these specs fall short of what the 3090 and even the 3080 bring to the table, and, as anyone can guess, both of them perform better in all performance benchmarks.

However, both the RTX 3080 and the 3090 need around 350 watts of power to run, while the Titan RTX requires just 280 watts. That's a power difference of roughly 70 watts, enough to make a noticeable dent in your yearly electricity bill. So, if long-term cost is your primary concern, the Titan RTX is worth spending some extra upfront.

Buy Here: Amazon

4. RTX 3070

Well, the RTX 3070 has almost doubled in price during the last few months. Still, it's the only high-end NVidia GPU that can be called reasonably affordable. The RTX 3070 is also impressive because it matches the performance of the top Turing GPU, the RTX 2080 Ti, for roughly two-thirds of the price.

The RTX 3070 has 5,888 CUDA cores, 8 GB of GDDR6 memory, a base clock speed of 1,500 MHz (boost clock 1,725 MHz), a memory speed of 14 GT/s, and 20.37 TFLOPS. Plus DLSS. The result? You get a 4K-capable GPU that doesn't require too much fiddling around to break that coveted performance barrier. Yes, on par with the RTX 2080 Ti.

Moreover, the RTX 3070 is the only GPU in the entire Ampere lineup with somewhat reasonable power consumption. It's rated at 220 W, significantly lower than any of its peers.

All in all, the RTX 3070 is the best NVidia GPU for most people. It brings 4K gaming to the mainstream for the first time, and DLSS is a neat trick to improve the card's overall performance. Other features like Broadcast and Reflex are just icing on the cake at such a comparatively low price.

Buy Here: Amazon

5. RTX 2080 Ti

NVidia's GeForce RTX 2080 Ti remained top dog for a very long time and was only recently dethroned by the 3000 series. However, it remains a monster in gaming performance, bringing impressive ray tracing, RT cores, and AI-based Tensor Cores to any GPU-intensive task.

The RTX 2080 Ti has 4,352 CUDA cores, 11 GB of GDDR6 memory, a boost clock speed of 1,635 MHz, a memory clock speed of 1,750 MHz, and is capable of 13.45 TFLOPS of performance, helped along by NVidia's first-ever factory overclock of 90 MHz.

As for performance, we were able to play Destiny 2 at 70-100 fps with the game running in 4K HDR. We noticed that the GPU drew more power to reach this level, around 277 W, and the PC got hot quickly. So be sure to have a good cooling setup in place before you start tinkering with the GPU.

With more and more games like Cyberpunk 2077 and COD: MW joining the list of titles that support ray tracing, the RTX 2080 Ti comes in handy for getting the best in-game performance. However, its steep price is sure to deter some potential buyers.

Buy Here: Amazon

Final Thoughts

Undoubtedly, the best NVidia GPU right now is the RTX 3090. Others like the RTX 3080 and Titan RTX come in a close second and third on our list. These cards are the best you can get in 2021 and are worth the extra splurge. Just keep their intended purpose in mind: if you don't need 8K, don't spend extra on the 3080, 3090, or even the Titan; the 3070 will do just fine in that case. That's all for now. Thank you for reading!

Best CPU and GPU Combo

While the CPU is the core of your system, it's rarely the limiting factor; in fact, it's the GPU that holds the key to your game's performance. However, when you're looking to maximize performance within a particular budget range, you need the best CPU and GPU combo.

Besides, knowing your needs is also important. For instance, a demanding gamer may not need a higher number of cores, but a multitasking video editor might. So we’ve compiled five pairings to suit most people’s budget and needs. Feast your eyes!

Ryzen 5 3600 With MSI GTX 1660 Super Ventus XS OC

This is one of the best CPU and GPU combos for 1080p at 60 fps. It combines AMD's latest Zen 2 platform with Nvidia's solid mid-range graphics card, giving you smooth 1080p gameplay and excellent frame rates without breaking the bank.

The Ryzen 5 3600 boasts six cores and 12 threads for multitasking. The 7 nm Zen 2 process allowed AMD a big leap forward in clock speeds and overall performance. This CPU is quite capable of achieving high frame rates in action-packed titles as well as handling light-to-medium productivity work, which makes it a great, versatile CPU that doesn't divert too much of your budget away from the GPU.

While the Nvidia GTX 1660 Super doesn't have the ray-tracing cores of the modern RTX 3000 series, it does have NVidia's NVENC streaming encoder, which means you can broadcast your video streams with minimal system overhead. The Super also moved the GPU to GDDR6 VRAM, a performance boost that rendered the previous GTX 1660 Ti model largely redundant.

That said, the MSI Ventus XS OC is a solid AIB GTX 1660 Super option for 1080p performance at 60 fps. It comes with two fans, multiple display outputs (DisplayPort and HDMI), and a reasonable price. You won't get a better deal.

Buy CPU Here: Amazon

Buy GPU Here: Amazon

Intel i9-10900kf With RTX 3090

If you want a no-holds-barred beast of a machine, look no further. This powerful combination supports both 4K video editing and gameplay at very high refresh rates; in fact, you can even hit 8K at 60 fps.

Intel's i9-10900KF is a 10-core, 20-thread chip that doesn't flinch at even the most demanding workloads. Featuring Intel's Turbo Boost Max 3.0 technology, this unlocked 10th Gen Intel Core desktop processor is optimized for enthusiasts. It's rather hard to cool, though, so make sure you invest in a powerful water-based cooler to match the overclock speeds.

Similarly, the RTX 3090 trumps both the 2080 Ti and the Titan RTX on every major rendering benchmark, and its whopping 24 GB of VRAM has no match on the market at the moment. The only downside of this combination is its price, which may be too much for some users.

So whether you’re working with productivity software such as Adobe CS suite or playing the latest AAA titles, this combination offers a significant gain in overall performance. You can throw anything at this pair with confidence.

Buy CPU Here: Amazon

Buy GPU Here: Amazon

Intel Core i5-9600K With RX 5700 Xt

In third place, we have the best CPU and GPU combo for 1080p @ 144 Hz competitive gaming. Given adequate cooling and a Z390 board, this CPU should easily touch the 5 GHz mark. Pair it with a high-end GPU like the RX 5700 XT, based on AMD's latest RDNA architecture, and you can expect 1080p (or higher) at 144 Hz in titles such as Fortnite and Overwatch to run smooth as the wind. No exaggeration!

From a price standpoint, Intel's CPU offers worse value than AMD's competing chips: it has just 6 cores, no Hyper-Threading, and a higher price tag. Despite these shortcomings, we believe competitiveness trumps value any day, and the i5-9600K offers better performance in esports rigs.

With its 8 GB of GDDR6 VRAM, the RX 5700 XT makes a worthy partner. For slightly lower performance (1080p @ 144 fps), you can opt for the EVGA RTX 2060 KO. However, the RX 5700 XT outperforms it with higher refresh rates, more VRAM (8 GB), better frame rates, higher resolutions, and greater color depth for next-gen displays. The card can drive displays at up to 8K @ 60 Hz or even 5K @ 120 Hz.

The XFX RX 5700 XT supports HDMI 2.0b and DisplayPort 1.4, making it compatible with the latest generation of monitors. This pair won't hold you back for mid- to high-performance needs.

Buy CPU Here: Amazon

Buy GPU Here: Amazon

Ryzen 7 3700X with EVGA GeForce RTX 2080 Super

Ultra-wide monitors take the regular load of 1440p and crank it up a notch: their rendering demands sit roughly halfway between QHD and 4K. So CPU choice is less of a concern than picking the right GPU, and it's the GPU that will test the depth of your pockets.

However, the Ryzen 7 3700X is a good choice, offering ample performance in a value package. It has a 4.4 GHz max boost and comes with 8 cores and 16 threads for multi-threaded programs. That is more than enough for any ultra-wide monitor.

Coming back to the GPU, if you want maximum performance at this resolution, your options are limited. Of course, you can go for a range-topping option like the latest RTX 3000 series, but the GeForce RTX 2080 Super offers better value and is capable of achieving reasonable frame rates at 1440p ultra-wide resolutions.

EVGA is a reputable name in the industry. Their RTX 2080 Super comes with real-time ray tracing, dual HDB fans, ample display outputs, and a 3-year warranty. Therefore, the Ryzen 7 3700X with the EVGA GeForce RTX 2080 Super gives you the best CPU and GPU combo for 1440p ultra-wide gaming.

Buy CPU Here: Amazon

Buy GPU Here: Amazon

Intel Core i3-10100 with Asus GeForce GTX 1660 Super Overclocked

The COVID-19 pandemic has sent GPU prices skyrocketing, so if you're building a gaming PC on a budget, this combination is for you. It lets you play at 1080p and 60 fps for under $600.

Intel's Core i3-10100 offers 4 cores and 8 threads, just like its rival, the Ryzen 3 3100. However, it's much cheaper; in fact, there's a price difference of around $80. We believe Hyper-Threading in this series has made it an absolute screamer at this price point.

We could go with various GPU options, but we like the GTX 1660 Super the most for 1080p gaming. It has more VRAM than the GTX 1650, more CUDA cores, and 2 Gbps faster memory. With 6 GB of VRAM, you can turn your settings up to the max and still run games at 60 fps.

But that's not all. The reason we love this little GPU is its power efficiency: at 125 watts, it's more energy-efficient than the RX 570. The downside? No ray tracing. For the price, though, ASUS has the best match for the Intel Core i3-10100.

Buy CPU Here: Amazon

Buy GPU Here: Amazon

Buyer’s Guide – Getting the best CPU and GPU combo

Although the CPU and GPU aren't directly related, one can bottleneck the other. When choosing the right CPU and GPU combination, keep these things in mind to get the most bang for your buck.

Compatibility

Software compatibility issues can usually be resolved with a system upgrade; hardware mismatches, on the other hand, can create a bottleneck. So if you've got a powerful CPU, make sure to pair an equally powerful graphics card with it for optimum performance. Generally, any GPU will work with your CPU as long as your motherboard has the right slot and your power supply delivers enough wattage.

Monitor Refresh Rate

If your monitor has a triple-digit refresh rate, you should get a powerful CPU and GPU to utilize its potential fully. Conversely, if your monitor maxes out at 60 Hz or 1080p, there's no point paying the extra bucks; your high-end GPU will just push pixels faster than the display can keep up with. So what's the point?

Power and Space

Does your case (and motherboard) have enough room for the CPU and GPU you're considering? Secondly, can your power supply provide enough juice for your needs? You will also need the right type of power connectors, depending on the card.

Temperature

Both CPUs and GPUs generate heat. You can go with stock cooling in both cases, but such coolers can struggle. Especially if you're overclocking the CPU for 4K gaming (yes, GPUs are also overclockable, usually with just 5-10% headroom at best), consider investing in a reliable cooler.

Final Thoughts

Ultimately, the best CPU and GPU combo is one that suits your needs and stays within your budget. While there are endless ways of combining these two components, the pandemic has taken a toll on GPU prices in particular, so the combinations above give you the best value for the money you can spend. You can look for rebates (or used options) to save anywhere between $30 and $100. Besides, AAA games are sometimes bundled with graphics cards, so you can save an additional $70+ if you're willing to wait for the right deal. That's all for now. Thank you for reading!

Best GPUs on a Budget

Five years ago, graphics cards priced upwards of $400 were barely touching 60 fps. Do you remember those days? Thanks to fierce competition between AMD and Nvidia, things have changed dramatically in the last two years. Although prices are on an upward trajectory in the post-COVID era, you can still find some decent graphics cards today. These options can max out the latest games at 1080p or even 1440p at an affordable cost.

Of course, for some people the best budget GPU falls in the $100 to $150 range, while for others it's anywhere from $150 to $300. For the sake of clarity, this article treats any graphics card priced under $300 as a budget option. We list the top 5 budget GPUs below; these options provide excellent value for every dollar spent and are more than capable of playing games at 1080p with a smooth frame rate on medium settings.

So, without further ado, let’s dive right in!

1. AMD RX 580 – 8GB

Believe it or not, the impressive AMD RX 580 8GB is now up for grabs at under 240 bucks. It boasts 2 HDMI ports, 1 DVI-D port, and 2 DisplayPorts, along with 8 GB of GDDR5 VRAM. In theory, it should hit 60 fps in most games at 1080p, making it a good option for console-quality graphics. Impressive, right?

Compared to the more popular GTX 1060 6GB, the RX 580 8GB performs better and still manages to cost less. Yes, there are some instances where the Nvidia GPU comes out ahead, but that's only noticeable in last-generation DirectX 11 games. With modern APIs and DirectX 12, the AMD silicon beats the GeForce handily.

As you step up a little in gaming resolution, the better memory subsystem of the AMD RX 580 comes to the fore. With an extra 2 GB of GDDR5 VRAM and a broader 256-bit memory bus, the RX 580 in 8 GB trim is better able to handle the rigors of high-resolution textures and higher pixel counts.

Overall, it can handle most of the latest games on high to ultra-high settings, and in some cases it even lets you play at 1440p. That's why it's the best-value gaming GPU on the market at the moment. However, its dual-fan design makes it slightly bigger.

Buy Here: Amazon

2. Nvidia GTX 1050 Ti – 4GB

The successor to the Nvidia GTX 1050 is a capable single-fan GPU based on Nvidia's Pascal architecture. It features a single HDMI port, a DVI-D port, DisplayPort 1.4, and 4 GB of GDDR5 VRAM. In theory, it should play all classic and most modern games at 1080p and 60 fps, although its modest VRAM may create some problems in demanding titles.

It can be compared to the AMD RX 580 4GB in performance, with both cards hitting the market only 6 months apart. In terms of size, however, the Nvidia card is tiny and virtually silent. The fact that it runs cool, even with a basic fan and heatsink, means that it can fit in small builds with no chance of reaching its thermal limits.

It has a 1,290 MHz GPU clock, and overclocking is possible, of course. With a tool like EVGA Precision X, you can add another 150 MHz to the core clock, while the GDDR5 proves quite capable of +350 MHz and more. Combined with increased power and thermal targets and higher fan speeds, these tweaks push the GTX 1050 Ti's performance up by approximately 10 percent.

Another advantage of this Nvidia card over the RX 580 4GB is that it uses half the power: 75 W instead of 150 W. What Nvidia has managed to extract from the minuscule 75 watts offered by the PCI Express slot alone is quite exceptional in this price range.

Buy Here: Amazon

3. AMD Radeon RX570 – 8GB

Coming in third place is another AMD GPU. It has a dual-fan design, an HDMI port, a DVI port, and 3 DisplayPort 1.4 outputs for scalability. The RX 570 8GB uses AMD's Polaris architecture, which provides a more than decent boost over the aging RX 400 series. Nevertheless, it obviously cannot beat the vastly superior (and more expensive) Vega architecture.

That certainly doesn't mean the RX 570 is a slouch. It packs a formidable 8 GB of GDDR5 VRAM, so you should expect excellent performance in games like Fortnite, GTA 5, and Far Cry New Dawn. Being an 8 GB package, you should be able to manage some 1440p @ 60 fps gaming as well, though results will depend on the specs of the rest of your machine.

What's more, the AMD Radeon RX 570 supports DirectX 12 and multi-GPU setups, offers a dual BIOS, and features XFX's Double Dissipation cooling design.

The core runs at 1,168 MHz and boosts to 1,244 MHz when overclocked. With a TDP rating of 120 W, it needs at least a 400 W PSU with a 6-pin connector, so if your power supply is weak, consider investing in a better one for this mean machine to work properly.

Buy Here: Amazon

4. AMD MSI Gaming Radeon RX 5500 XT – 4GB

If you don't want to spend money on dated budget GPUs, then AMD's Radeon RX 5500 XT may be for you. It launched last year, targeting AAA gameplay at 1080p. Display outputs vary by vendor, but the GPU offers at least one DVI, DisplayPort, and HDMI connection like the rest of the cards on this list.

As for specifications, it includes 4 GB of GDDR6 memory with a base clock of 1,435 MHz and a 128-bit memory bus providing 224 GB/s of bandwidth. MSI's website lists a 1,717 MHz game clock and a 1,845 MHz boost clock, though we couldn't verify these figures.

A nice little touch is the dual onboard BIOS, selectable via a switch on the top. The default BIOS raises the power limit to 135 W, while the second BIOS is tuned for quieter operation and brings the limit back to 120 W. The card also requires an 8-pin connector to juice it up, which seems slightly overkill.

That said, the Radeon RX 5500 XT introduced AMD's latest RDNA architecture, trumping Nvidia's RTX 2060 Super in performance by almost 10 percent at the time. Although it draws more juice from the power supply, its 1440p gameplay is much smoother, and it remains a viable alternative to the ray-tracing-capable yet pricey RTX 2060 Super.

Buy Here: Amazon

5. GTX 1650 Gaming X 4GB

If your budget doesn't stretch to a GTX 1650 Super or 1660 but you still want an NVidia card, the GTX 1650 Gaming X 4G is a good alternative. It replaces the Nvidia GTX 1050 Ti and offers better performance for an additional $100. It has 3 DisplayPorts, an HDMI 2.0b port, and a 128-bit memory interface along with 4 GB of GDDR6 VRAM.

Compared to the 1050 or 1050 Ti, the 1650 has more memory bandwidth and more CUDA cores. Secondly, it clocks quite a bit higher. And thirdly, its Turing architecture supports concurrent FP32 and INT calculations, which alone can boost performance by 10 to 30 percent over Pascal-architecture GPUs.

This card pairs well with a G-Sync monitor and produces great frame rates in various esports titles at 1080p @ 60 fps. Of course, it cannot compete with the AMD chips in the same price range, but it's certainly more power-efficient and comes with a better factory overclock. Besides, it uses the tried-and-tested dual-fan design.

All in all, the GTX 1650 Gaming X 4GB should run all the recent AAA titles at decent frame rates. For more demanding titles like Metro Exodus, however, you may have to lower the settings. As for the price, believe it or not, this card has almost doubled in price since its launch, which is why it's one of the more expensive options on our list.

Buy Here: Amazon

Buyer’s Guide for the Best GPU on a Budget

When you are on a budget, consider these factors when choosing a graphics card.

VRAM

VRAM is different from your system RAM; it's dedicated memory on your graphics card. As you may have noticed in our reviews, you need at least 4 GB of VRAM to run games smoothly at 1080p @ 60 fps. Anything lower and you risk not meeting the base memory requirements, in which case you have to lower your in-game settings to get good frame rates.

Cooling

Most budget graphics cards have smaller fans and heatsinks, which aren't efficient at keeping temperatures down if you overclock. A card with bigger aluminum heatsinks, copper heat pipes, or a dual-fan design is better at keeping temperatures low while overclocking. Moreover, make sure your system has enough ventilation, even if you are not overclocking.

Power

Different graphics cards have different TDP ratings, so check before you purchase. Make sure your PSU has enough wattage to provide the juice your card requires without any bottlenecks; otherwise, you may have to replace the PSU, which in some cases can cost more than a budget GPU. If you are running multiple AIO coolers, fans, storage devices, and other peripherals, you will need extra power headroom as well.

IO Connectors

Check the I/O ports of the graphics card you are purchasing. It should have at least one DVI, HDMI, and DisplayPort output to get that smooth 144 Hz refresh rate, and make sure both your card and your monitor support the same connection to enjoy your gaming experience.

Final Thoughts

Our buyer's guide for the best GPU on a budget ends here. A graphics card is the lifeline of a high-performance rig, so whether you're going for an AMD or Nvidia model, make sure you have done your homework. All the options listed above are great purchases in the sub-$300 budget range. Give them a shot, and we are sure you won't be disappointed. That's all for now. Till next time, folks!

Best CPUs and GPUs for Blender Rendering

Blender is a versatile tool for 3D creation that covers an entire pipeline for 3D graphics and visual effects. It is robust software for modeling, sculpting, shading, compositing, and animation, and, astonishingly, you can get it without spending a single buck. Being open source, it allows developers to create add-ons and plugins that help hobbyists and professionals create 3D graphics. 3D visual effects and modeling have become significantly important in both high- and low-budget productions. Blender is incredible software if you have the right workstation: the viewport is not demanding, but when it comes to rendering, things get a little different. This article covers the ins and outs of building a workstation for Blender.

Before building a workstation for Blender, it is important to look at how Blender uses the hardware.

Blender is capable of doing many things: it has dedicated tabs or modes for modeling, sculpting, animation, shading, and more. The versatility of the software makes it tricky to pick specific components for a Blender workstation.

3D modeling is the most extensively used mode in Blender. Blender divides the 3D modeling workload between the CPU and GPU: for modifiers, shapes, and Python modules it usually uses the CPU, while for visual effects, geometry, and viewport rendering it tries to use the GPU. This approach makes Blender quite flexible. If you do low-poly 3D modeling, almost any configuration is fine, but for high-poly, OpenSubdiv, and parametric 3D graphics you need a powerful workstation.

Sculpting is another hardware-intensive mode, and it needs a lot of RAM, because complex sculpts can involve millions of faces to process. If you are a sculpting enthusiast, an ordinary workstation will be of little use.

There are two rendering engines in Blender.

  • Cycles
  • Eevee

Cycles is a ray-tracing render engine and takes a tremendous amount of hardware power. If your workstation is not robust enough, it may take hours to render even a simple scene. This engine is demanding but produces amazing results; if you need realistic output from Cycles, a powerful workstation is required. Cycles is flexible and can run on the CPU, the GPU, or a hybrid configuration (CPU + GPU).
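Which device Cycles uses is a per-scene setting that can also be driven from Blender's Python console. The snippet below is a minimal sketch (run inside Blender, which provides the bpy module); it assumes an NVIDIA card and therefore selects the CUDA backend, while other builds expose OPTIX or OpenCL/HIP instead.

```python
import bpy

# Use Cycles and ask it to render on the GPU for this scene.
scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.cycles.device = 'GPU'

# Pick the CUDA backend and enable every device Blender detects.
prefs = bpy.context.preferences.addons['cycles'].preferences
prefs.compute_device_type = 'CUDA'
prefs.get_devices()
for device in prefs.devices:
    device.use = True
```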

Because of its realistic, high-quality results, the Cycles render engine has been used in high-budget productions like The Man in the High Castle and Next Gen. If you have a powerful GPU, Cycles will render faster on it than on the CPU.

Eevee is a lightweight render engine introduced in Blender 2.8. First of all, Eevee cannot be compared directly to Cycles, because it cannot achieve Cycles' fidelity. It is a GPU-based renderer and uses PBR techniques of the kind used in video game rendering.

In simple terms, Eevee is a high-quality viewport renderer that is much faster than Cycles. If you work mainly in Eevee, it will run fine on mid-range GPUs.

We have seen the general features that demand most of the hardware power; now we will look at the specific components needed to build a robust Blender workstation. The most important of these are the CPU and the GPU.

CPU

Why are we concentrating on CPUs when GPUs give faster performance? CPUs are important because:

  • They can handle complex tasks. GPUs are designed to focus on one operation applied to a large amount of data, whereas CPUs are quite good at processing complex, branching operations.
  • CPUs can address large amounts of system memory, from 8 GB to 64 GB and beyond, which means you are far less likely to run out of memory while rendering, a common cause of crashes. GPUs come with limited memory; most GPUs available today have 12-24 GB, and even in multi-GPU configurations these memories do not combine.

Choosing a CPU can be a little complicated. You could simply buy the most powerful CPU on the market, but ultimately your budget drives the final selection. So what are the best CPUs available? Let's find out.

AMD Ryzen 9 3950X

The Ryzen 9 3950X can be considered the best CPU on the market for Blender. It has 16 cores with an operating frequency of 3.7 GHz and 32 threads, and Blender's Cycles renderer is good at utilizing threads.

Pros

  • Cost-effective
  • Power Efficient
  • Modest TDP

Cons

  • Single-core performance is unsatisfactory

Buy Now: Amazon

Intel Core i9 10900K

The second-best processor for Blender is Intel's Core i9 10900K. The i9 has 10 cores, which is a lot fewer than AMD's Ryzen 9, and 20 threads. The operating frequency is 3.7 GHz and can reach a maximum of 5.3 GHz.

Pros

  • Faster single-core performance
  • It can be overclocked
  • Great for gaming

Cons

  • Needs a new motherboard
  • Consumes more power

Buy Now:  Amazon

AMD Ryzen 9 3900XT

The 3900XT is another CPU from AMD, with an operating frequency of 3.8 GHz. It has 12 cores and 24 threads. If gaming is a priority, though, this CPU is not the best choice.

Pros

  • Great single-core performance
  • Compatible with 3000 series motherboards

Cons

  • Does not come with a cooling fan
  • Not good for gaming

Buy Now: Amazon

Let’s have a look at some other choices as well

| CPU | Base–Boost Frequency | L3 Cache | Cores | Threads | TDP | Pros and Cons | Cost |
|---|---|---|---|---|---|---|---|
| AMD Ryzen 9 3950X | 3.7–4.7 GHz | 64 MB | 16 | 32 | 105 W | Good TDP and power-efficient, but does not come with a cooler | $709.99 |
| Intel Core i9 10900K | 3.7–5.3 GHz | 20 MB | 10 | 20 | 125 W | Can be overclocked and is great for gaming, but not power-efficient | $598.88 |
| AMD Ryzen 9 3900XT | 3.7–4.8 GHz | 64 MB | 12 | 24 | 105 W | Better single-core performance; not a good choice if you are a gamer | $454.99 |
| AMD Ryzen 7 3800X | 3.9–4.5 GHz | 32 MB | 8 | 16 | 105 W | Decent multithreaded performance, but no other significant improvement over its predecessor | $339 |
| Intel Core i7 10700 | 2.9–4.8 GHz | 16 MB | 8 | 16 | 65 W | Excellent for gaming and power-efficient, but a low base frequency | $379.88 |
| AMD Ryzen 5 3600X | 3.8–4.4 GHz | 32 MB | 6 | 12 | 95 W | Budget-friendly and comes with a cooler, but does not support the latest motherboards | $199 |

CPUs have their own importance and advantages, but when it comes to raw rendering speed they cannot beat GPUs. Why do we need a GPU for Blender? Let's find out.

Faster

GPUs are a lot faster than CPUs when rendering because they have far more processing cores. A GPU with decent memory can help a lot in Blender rendering; if you want to save time, a good GPU will work for you.

GPUs are also great for high-poly models. If your project contains a lot of complex geometry, a GPU can help speed up the rendering.

Nvidia GeForce RTX 2080 Ti

This GPU is the best for Blender and should be a top priority when putting together a Blender workstation.

It has 4,352 CUDA cores with a 1,350 MHz base clock and 11 GB of memory.

Pros

  • Ray Tracing
  • 4K gaming

Cons

  • Expensive

Buy Now: Amazon

Nvidia GeForce RTX 2070 Super

This GPU has 2,560 CUDA cores and 8 GB of memory. It is the most compelling option if you don't have the budget for an RTX 2080 or 2080 Ti.

Pros

  • More Cores
  • Ray Tracing

Cons

  • A bit heavier

Buy Now: Amazon

Nvidia GeForce GTX 1650 Super

If you are tight on budget, then the GTX 1650 is for you. It has 896 cores with a 1,485 MHz clock and 4 GB of memory.

Pros

  • Affordable
  • Power-efficient

Cons

  • No big improvement in performance compared to predecessors

Buy Now: Amazon

There are also AMD graphics processing units available, but the issue with AMD's GPUs is that most of them do not support hardware ray tracing. If you are seriously building a workstation for Blender, go for Nvidia GeForce GPUs.

| GPU | CUDA Cores | Clock | Memory | Pros and Cons | Cost |
|---|---|---|---|---|---|
| Nvidia GeForce RTX 2080 Ti | 4,352 | 1,545 MHz | 11 GB | Ray tracing and 4K gaming, but still very expensive | $1,899 |
| Nvidia GeForce RTX 2070 Super | 2,560 | 1,770 MHz | 8 GB | Also comes with ray tracing and a decent number of cores | $587 |
| Nvidia GeForce GTX 1650 Super | 896 | 1,485 MHz | 4 GB | Affordable and very power-efficient, but not a big improvement over its predecessors | $210 |

There is a range of CPUs and GPUs to choose from when building a powerful Blender workstation, but it all depends on your budget. We have discussed CPUs and GPUs across different price ranges. If you have a big budget, go for the Ryzen 9 3900XT and the Nvidia GeForce RTX 2080 Ti; if the budget is limited, go for an affordable CPU such as the AMD Ryzen 5 3600X and the Nvidia GeForce GTX 1650 Super GPU.

What is the Best Graphics Card for Deep Learning?

If a CPU is the brain of a PC, then a GPU is the soul. While most PCs may work without a good GPU, deep learning is not possible without one. This is because deep learning involves complex operations like matrix manipulation, has exceptional computational prerequisites, and demands substantial computing power.

Experience is vital to developing the skills necessary to apply deep learning to new problems. A fast GPU means rapid gains in practical experience through immediate feedback. GPUs contain many cores to handle parallel computations, and they also incorporate extensive memory bandwidth to manage this information with ease.

Our top recommended pick for the best graphics card for deep learning is the Nvidia GeForce RTX 2080 Founders Edition. Buy it now for $1,940 USD on Amazon.

With this in mind, we seek to answer the question, “What is the best graphics card for AI, machine learning and deep learning?” by reviewing several graphics cards currently available in 2021. Cards Reviewed:

  1. AMD RX Vega 64
  2. NVIDIA Tesla V100
  3. Nvidia Quadro RTX 8000
  4. GeForce RTX 2080 Ti
  5. NVIDIA Titan RTX

Below are the results:


AMD RX Vega 64


Features

  • Release Date: August 14, 2017
  • Vega Architecture
  • PCI Express Interface
  • Clock Speed: 1247 MHz
  • Stream Processors: 4096
  • VRAM: 8 GB
  • Memory Bandwidth: 484 GB/s

Review

If you do not like NVIDIA GPUs, or your budget doesn't allow you to spend upwards of $500 on a graphics card, then AMD has a smart alternative. Housing a decent amount of RAM, fast memory bandwidth, and more than enough stream processors, AMD's RX Vega 64 is very hard to ignore.

The Vega architecture is an upgrade from the previous RX cards. In terms of performance, this model is close to the GeForce GTX 1080, as both cards have a similar amount of VRAM. Moreover, Vega supports native half-precision (FP16). ROCm and TensorFlow work, but the software stack is not as mature as it is for NVIDIA graphics cards.

All in all, the Vega 64 is a decent GPU for deep learning and AI. This model costs well below $500 USD and gets the job done for beginners. However, for professional applications, we recommend opting for an NVIDIA card.

AMD RX Vega 64 Details: Amazon


NVIDIA Tesla V100


Features:

  • Release Date: December 7, 2017
  • NVIDIA Volta architecture
  • PCI-E Interface
  • 112 TFLOPS Tensor Performance
  • 640 Tensor Cores
  • 5120 NVIDIA CUDA® Cores
  • VRAM: 16 GB
  • Memory Bandwidth: 900 GB/s
  • Compute APIs: CUDA, DirectCompute, OpenCL™, OpenACC®

Review:

The NVIDIA Tesla V100 is a behemoth and one of the best graphics cards for AI, machine learning, and deep learning. This card is fully optimized and comes packed with all the goodies one may need for this purpose.

The Tesla V100 comes in 16 GB and 32 GB memory configurations. With plenty of VRAM, AI acceleration, high memory bandwidth, and specialized Tensor Cores for deep learning, you can rest assured that every training model will run smoothly, and in less time. Specifically, the Tesla V100 can deliver 125 TFLOPS of deep learning performance for both training and inference [3], made possible by NVIDIA's Volta architecture.

NVIDIA Tesla V100 Details: Amazon, (1)


Nvidia Quadro RTX 8000


Features:

  • Release Date: August 2018
  • Turing Architecture
  • 576 Tensor Cores
  • CUDA Cores: 4,608
  • VRAM: 48 GB
  • Memory Bandwidth: 672 GB/s
  • 16.3 TFLOPS
  • System interface: PCI-Express

Review:

Built specifically for deep learning matrix arithmetic and computations, the Quadro RTX 8000 is a top-of-the-line graphics card. Since this card comes with a large VRAM capacity (48 GB), it is recommended for researching extra-large computational models. When used in a pair with NVLink, the effective capacity can be increased to 96 GB of VRAM, which is a lot!

A combination of 72 RT cores and 576 Tensor cores for enhanced workflows results in over 130 TFLOPS of performance. Compared to the most expensive graphics card on our list, the Tesla V100, this model potentially offers 50 percent more memory and still manages to cost less. Thanks to that installed memory, this model offers exceptional performance when working with larger batch sizes on a single GPU.

Again, like the Tesla V100, this model is limited only by your price ceiling. That said, if you want to invest in the future and in high-quality computing, get an RTX 8000. Who knows, you may lead the research on AI. The Quadro RTX 8000 is based on the Turing architecture, whereas the V100 is based on Volta, so the RTX 8000 can be considered slightly more modern and slightly more powerful than the V100.

Nvidia Quadro RTX 8000 Details: Amazon


GeForce RTX 2080 Ti


Features:

  • Release Date: September 20, 2018
  • Turing GPU architecture and the RTX platform
  • Clock Speed: 1350 MHz
  • CUDA Cores: 4352
  • 11 GB of next-gen, ultra-fast GDDR6 memory
  • Memory Bandwidth: 616 GB/s
  • Power: 260W

Review:

The GeForce RTX 2080 Ti is a budget option ideal for small-scale modeling workloads rather than large-scale training runs, because it has less GPU memory per card (only 11 GB). This model's limitations become more obvious when training some modern NLP models. However, that does not mean the card cannot compete. The blower design on the RTX 2080 Ti allows for far denser system configurations, up to four GPUs within a single workstation. Plus, this model trains neural networks at roughly 80 percent of the speed of the Tesla V100. According to LambdaLabs' deep learning performance benchmarks, compared with the Tesla V100, the RTX 2080 Ti reaches 73% of its speed in FP32 and 55% in FP16.

Meanwhile, this model costs nearly one-seventh as much as a Tesla V100. From both a price and a performance standpoint, the GeForce RTX 2080 Ti is a great GPU for deep learning and AI development.

GeForce RTX 2080 Ti Details: Amazon


NVIDIA Titan RTX


Features:

  • Release Date: December 18, 2018
  • Powered by NVIDIA Turing™ architecture designed for AI
  • 576 Tensor Cores for AI acceleration
  • 130 teraFLOPS (TFLOPS) for deep learning training
  • CUDA Cores: 4608
  • VRAM: 24 GB
  • Memory Bandwidth: 672 GB/s
  • Recommended power supply 650 watts

Review:

The NVIDIA Titan RTX is another mid-range GPU used for complex deep learning operations. This model's 24 GB of VRAM is enough to work with most batch sizes. If you wish to train larger models, pair two of these cards with the NVLink bridge to effectively have 48 GB of VRAM, an amount that is enough even for large transformer NLP models. Moreover, the Titan RTX allows full-rate mixed-precision training (i.e., FP16 compute with FP32 accumulation). As a result, this model performs approximately 15 to 20 percent faster in operations where Tensor Cores are utilized.
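In practice, mixed-precision training of this kind is switched on in the framework rather than on the card. The sketch below uses PyTorch's automatic mixed precision as one example (the tiny linear model and random data are placeholders, and a CUDA-capable GPU is assumed); other frameworks such as TensorFlow offer an equivalent option.

```python
import torch
from torch import nn

device = torch.device("cuda")
model = nn.Linear(1024, 10).to(device)           # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler()             # scales FP16 gradients

inputs = torch.randn(64, 1024, device=device)    # placeholder batch
targets = torch.randint(0, 10, (64,), device=device)

optimizer.zero_grad()
with torch.cuda.amp.autocast():                  # FP16 where safe, FP32 elsewhere
    loss = nn.functional.cross_entropy(model(inputs), targets)
scaler.scale(loss).backward()                    # avoid FP16 gradient underflow
scaler.step(optimizer)
scaler.update()
```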

One limitation of the NVIDIA Titan RTX is its twin-fan design. This hampers denser system configurations, because the card cannot be packed into a multi-GPU workstation without substantial modifications to the cooling mechanism, which is not recommended.

Overall, the Titan RTX is an excellent, all-purpose GPU for just about any deep learning task. Compared to other general-purpose graphics cards, it is certainly expensive, which is why this model is not recommended for gamers. Nevertheless, the extra VRAM and performance boost will likely be appreciated by researchers working with complex deep learning models. The price of the Titan RTX is meaningfully less than that of the V100 showcased above, and it would be a good choice if your budget does not allow for V100 pricing or your workload does not need more than the Titan RTX offers (see the benchmarks listed in the references).

NVIDIA Titan RTX Details: Amazon


Choosing the best graphics card for AI, machine learning, and deep learning

AI, machine learning, and deep learning tasks process heaps of data. These tasks can be very demanding on your hardware. Below are the features to keep in mind before purchasing a GPU.

Cores

As a simple rule of thumb, the greater the number of cores, the higher the performance of your system, particularly if you are dealing with a large amount of data. NVIDIA calls its cores CUDA cores, while AMD calls them stream processors. Go for the highest number of processing cores your budget will allow.

Processing Power

The processing power of a GPU is roughly the number of cores multiplied by the clock speed at which those cores run (times the number of floating-point operations each core can perform per cycle). The higher the clock speed and the greater the number of cores, the more data your GPU can compute per second, and the faster your system will perform a task.
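As a back-of-the-envelope illustration, the sketch below applies that rule to the RTX 2080 Ti's published figures (4,352 CUDA cores at roughly 1,545 MHz boost); the factor of 2 assumes one fused multiply-add per core per cycle.

```python
def peak_tflops(cores: int, clock_mhz: float, flops_per_cycle: int = 2) -> float:
    """Rough peak FP32 throughput in TFLOPS."""
    return cores * clock_mhz * 1e6 * flops_per_cycle / 1e12

# RTX 2080 Ti: 4,352 cores at ~1,545 MHz boost -> ~13.45 TFLOPS
print(f"{peak_tflops(4352, 1545):.2f} TFLOPS")
```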

VRAM

Video RAM, or VRAM, is a measure of the amount of data your card can hold and work on at once. More VRAM is vital if you are working with computer vision models or entering CV Kaggle competitions. VRAM is not as important for NLP or for working with other categorical data.

Memory Bandwidth

Memory bandwidth is the rate at which data is read from or written to the card's memory; in simple terms, it is the speed of the VRAM. Measured in GB/s, more memory bandwidth means the card can move more data in less time, which translates into faster operation.
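Bandwidth follows directly from the effective memory data rate and the bus width. The sketch below uses the RTX 2080 Ti's published figures (14 GT/s effective GDDR6 on a 352-bit bus) and reproduces the 616 GB/s quoted in its spec list above.

```python
def bandwidth_gb_s(effective_rate_gt_s: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s."""
    return effective_rate_gt_s * bus_width_bits / 8  # divide by 8: bits -> bytes

# RTX 2080 Ti: 14 GT/s on a 352-bit bus -> 616 GB/s
print(f"{bandwidth_gb_s(14, 352):.0f} GB/s")
```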

Cooling

GPU temperature can be a significant bottleneck when it comes to performance. Modern GPUs boost their clock speed to the maximum while running an algorithm, but as soon as a certain temperature threshold is reached, the GPU throttles its processing speed to protect against overheating.

A blower-style air cooler pushes air out of the case, while non-blower (open-air) fans pull air in and recirculate it. In configurations where multiple GPUs are placed next to each other, non-blower cards will heat up more, so if you are using air cooling in a setup with 3 to 4 GPUs, avoid non-blower fans.

Water cooling is another option. Though expensive, this method is much more silent and ensures that even the beefiest GPU setups remain cool throughout operation.

Conclusion

For most users foraying into deep learning, the RTX 2080 Ti or the Titan RTX will provide the greatest bang for the buck. The only drawback of the RTX 2080 Ti is its limited 11 GB of VRAM. Training with larger batch sizes allows models to train faster and more accurately, saving a lot of the user's time, and that is only possible with the Quadro GPUs or a Titan RTX. Using half-precision (FP16) allows models to fit in GPUs with insufficient VRAM [2]. For more advanced users, however, the Tesla V100 is where you should invest; that is our top pick for the best graphics card for AI, machine learning, and deep learning. That is all for this article. We hope you liked it. Until next time!

References

  1. Best GPUs For AI, Machine Learning and Deep Learning in 2020
  2. Best GPU for deep learning in 2020
  3. NVIDIA AI INFERENCE PLATFORM: Giant Leaps in Performance and Efficiency for AI Services, from the Data Center to the Network’s Edge
  4. NVIDIA V100 TENSOR CORE GPU
  5. Titan RTX Deep Learning Benchmarks
How to Mine Cryptocurrency with GPU Mining Rigs

As you may already know, it's no longer possible to profitably mine Bitcoin and many other cryptocurrencies using a CPU (Central Processing Unit), because ASICs (Application-Specific Integrated Circuits) have taken over. But there are just as many cryptocurrencies that are resistant to ASIC mining, including Ethereum, Ubiq, and Zcash, and such cryptocurrencies can be mined with GPU (Graphics Processing Unit) mining rigs.

What Are GPU Mining Rigs?

A GPU mining rig is a special computer put together for the sole purpose of mining cryptocurrencies using GPUs. As such, a GPU mining rig can look like a regular personal computer, have the same hardware components as a regular personal computer, and even run the same operating system as a regular personal computer.

However, most GPU mining rigs have more than one GPU (sometimes as many as 6 or 8 or even more), and they are housed inside special mining cases optimized for efficient cooling. It’s also typical for a mining rig to have more than one PSU (Power Supply Unit) because the total power consumption of a GPU mining rig often exceeds 1,000 watts.

How Much Money Can I Make?

How much money you can make with a GPU mining rig depends on the hardware configuration of your rig, the price of the individual components, your cost of electricity, and which cryptocurrencies you decide to mine with your rig.

We recommend using an online mining calculator to work out which cryptocurrencies are the most profitable to mine. Typically, Ethereum is at the very top, followed by Ubiq and then Zcash.
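Such calculators essentially subtract your electricity cost from your expected reward. The sketch below shows the shape of that calculation; every number in the example call is a hypothetical placeholder, so substitute your rig's hashrate, the current per-hashrate reward rate, the coin price, and your electricity tariff.

```python
def daily_profit_usd(hashrate_mh: float, reward_per_mh_per_day: float,
                     coin_price_usd: float, power_draw_w: float,
                     electricity_usd_per_kwh: float) -> float:
    revenue = hashrate_mh * reward_per_mh_per_day * coin_price_usd
    power_cost = (power_draw_w / 1000) * 24 * electricity_usd_per_kwh
    return revenue - power_cost

# Hypothetical 6-GPU rig: 180 MH/s, 900 W, $0.12/kWh electricity.
print(daily_profit_usd(hashrate_mh=180,
                       reward_per_mh_per_day=0.00002,  # coins per MH/s per day
                       coin_price_usd=1500,
                       power_draw_w=900,
                       electricity_usd_per_kwh=0.12))
```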

What Should I Buy?

To build a GPU mining rig, you will need to purchase several hardware components. Start by deciding how many GPUs you would like your mining rig to have. If only two, any regular desktop PC case will do; if more, you'll need a special mining case, such as an aluminum stackable open-air frame that holds up to 8 GPUs.

Right now, the best GPUs for mining in terms of value are the AMD RX 480 and the Nvidia GTX 1070 Founders Edition. Again, use the online mining calculator recommended above to calculate which of the two GPUs is more profitable for your cryptocurrency of choice.

You’ll also need a motherboard with enough PCI Express connectors for all your GPUs. There are now special mining motherboards, such as the ASRock H110 Pro BTC+ (up to 13 GPUs) or the ASUS B250 Mining Expert (up to 19 GPUs), but you can also use a regular desktop motherboard, such as the ASUS PRIME Z270-A.

In most cases, you won't plug your GPUs directly into the motherboard. Instead, you will use PCI Express risers so that you can mount your GPUs on rails for better cooling and to overcome space restrictions. A pro tip: use hot glue to secure your PCI Express risers and the cables connected to them.

The PSU of choice for most miners is the EVGA 1000 GQ, which is an 80 PLUS Gold certified power supply with heavy-duty protections and a large and quiet fan with fluid dynamic bearings.

Other hardware components, such as the CPU and RAM, are not too important, and you shouldn’t feel bad if you decide to save money on them. Any modern Intel Pentium CPU, such as the G4560, should work fine. Just make sure that it’s compatible with your motherboard.

How Do I Start?

With your GPU mining rig ready to go, we recommend you buy and set up the ethOS 64-bit Linux mining distribution, which supports Ethereum, Zcash, Monero, and other GPU-minable coins. Of course, you can get by without a specialized mining operating system, but we guarantee that ethOS will pay for itself multiple times in the long run.

ethOS supports up to 16 AMD RX or Nvidia GPUs. It can automatically assign an IP address and hostname, has built-in GPU overheat protection, and features automatic reporting and remote configuration. Above all, it is extremely lightweight, working with CPUs from the last five generations and requiring only 2 GB of RAM.

GPU Programming Introduction https://linuxhint.com/gpu-programming/ Mon, 18 Dec 2017 16:40:42 +0000 https://linuxhint-com.zk153f8d-liquidwebsites.com/?p=20847

General-purpose computing on a GPU (Graphics Processing Unit), better known as GPU programming, is the use of a GPU together with a CPU (Central Processing Unit) to accelerate computation in applications traditionally handled only by the CPU. Even though GPU programming has been practically viable only for the past two decades, its applications now include virtually every industry. For example, GPU programming has been used to accelerate video, digital image, and audio signal processing, statistical physics, scientific computing, medical imaging, computer vision, neural networks and deep learning, cryptography, and even intrusion detection, among many other areas.

This article serves as a theoretical introduction aimed at those who would like to learn how to write GPU-accelerated programs as well as those who have just a general interest in this fascinating topic.

The Difference Between a GPU and a CPU

A long time before high-resolution, high-fidelity 3D graphics became the norm, most computers had no GPU. Instead, the CPU carried out all the instructions of computer programs by performing the basic arithmetic, logical, control, and input/output (I/O) operations specified by the instructions. For this reason, the CPU is often described as the brain of the computer.

But in recent years, the GPU, which is designed to accelerate the creation of images for output to a display device, has often been helping the CPU solve problems in areas that were previously handled solely by the CPU.

Graphics card manufacturer Nvidia provides a simple way to understand the fundamental difference between a GPU and a CPU: “A CPU consists of a few cores optimized for sequential serial processing while a GPU has a massively parallel architecture consisting of thousands of smaller, more efficient cores designed for handling multiple tasks simultaneously.”

The ability to handle multiple tasks at the same time makes GPUs highly suitable for some tasks, such as searching for a word in a document, while other tasks, such as calculating the Fibonacci sequence, don’t benefit from parallel processing at all.
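As a small illustration of this difference, consider the Python sketch below (the document chunks and sizes are made up for the example): counting a word can be split into independent pieces that a parallel processor could handle all at once, while each Fibonacci number depends on the two before it, so the work cannot be spread across cores.

# Word search: every chunk can be scanned independently, so the work
# maps naturally onto many parallel cores.
def count_word(chunk, word):
  return chunk.count(word)

chunks = ["the quick brown fox jumps over the lazy dog"] * 1000  # hypothetical document split into chunks
total = sum(count_word(chunk, "fox") for chunk in chunks)        # each call is independent of the others

# Fibonacci: every term depends on the previous two, so the loop below
# is inherently sequential and gains nothing from extra cores.
def fibonacci(n):
  a, b = 0, 1
  for _ in range(n):
    a, b = b, a + b
  return a

print(total, fibonacci(30))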

However, among the tasks that do significantly benefit from parallel processing is deep learning, one of the most highly sought-after skills in tech today. Deep learning algorithms mimic the activity in layers of neurons in the neocortex, allowing machines to learn how to understand language, recognize patterns, or compose music.

As a result of the growing importance of artificial intelligence, the demand for developers who understand general-purpose computing on a GPU has been soaring.

CUDA Versus OpenCL Versus OpenACC

Because GPUs understand computational problems in terms of graphics primitives, early efforts to use GPUs as general-purpose processors required reformulating computational problems in the language of graphics cards.

Fortunately, it’s now much easier to do GPU-accelerated computing thanks to parallel computing platforms such as Nvidia’s CUDA, OpenCL, or OpenACC. These platforms allow developers to ignore the language barrier that exists between the CPU and the GPU and, instead, focus on higher-level computing concepts.

CUDA

Initially released by Nvidia in 2007, CUDA (Compute Unified Device Architecture) is the dominant proprietary framework today. “With CUDA, developers are able to dramatically speed up computing applications by harnessing the power of GPUs,” is how Nvidia describes the framework.

Developers can call CUDA from programming languages such as C, C++, Fortran, or Python without any skills in graphics programming. What’s more, the CUDA Toolkit from Nvidia contains everything developers need to start creating GPU-accelerated applications that greatly outperform their CPU-bound counterparts.

The CUDA SDK is available for Microsoft Windows, Linux, and macOS. The CUDA platform also supports other computational interfaces, including OpenCL, Microsoft's DirectCompute, OpenGL Compute Shaders, and C++ AMP.
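To give a feel for what calling CUDA from a high-level language looks like, here is a minimal sketch that uses the Numba CUDA binding for Python rather than the raw CUDA C API; the kernel name, array sizes, and launch configuration are arbitrary choices for illustration, and the snippet assumes a working CUDA installation with Numba present.

import numpy as np
from numba import cuda

# A CUDA kernel written in Python: each GPU thread adds one pair of elements.
@cuda.jit
def add_kernel(x, y, out):
  i = cuda.grid(1)          # absolute index of this thread across the whole grid
  if i < x.size:            # guard threads that fall beyond the end of the arrays
    out[i] = x[i] + y[i]

n = 1000000                 # arbitrary problem size for illustration
x = np.ones(n, dtype=np.float32)
y = np.ones(n, dtype=np.float32)
out = np.zeros(n, dtype=np.float32)

threads_per_block = 256
blocks_per_grid = (n + threads_per_block - 1) // threads_per_block
add_kernel[blocks_per_grid, threads_per_block](x, y, out)  # Numba copies the arrays to the GPU and back

print(out[:5])              # expected: [2. 2. 2. 2. 2.]

Compared with writing the same kernel in CUDA C, the host-side boilerplate for allocating device memory and copying data is handled automatically, which is part of why high-level bindings like this have become popular.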

OpenCL

Initially released by the Khronos Group in 2009, OpenCL is the most popular open, royalty-free standard for cross-platform, parallel programming. According to the Khronos Group, “OpenCL greatly improves the speed and responsiveness of a wide spectrum of applications in numerous market categories including gaming and entertainment titles, scientific and medical software, professional creative tools, vision processing, and neural network training and inferencing.”

OpenCL has so far been implemented by Altera, AMD, Apple, ARM, Creative, IBM, Imagination, Intel, Nvidia, Qualcomm, Samsung, Vivante, Xilinx, and ZiiLABS, and it supports all popular operating systems across all major platforms, making it extremely versatile. OpenCL defines a C-like language for writing programs, but third-party APIs exist for other programming languages and platforms such as Python or Java.
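To give a feel for the C-like kernel language and the host API, below is a minimal vector-addition sketch using the third-party PyOpenCL binding for Python; the buffer names and array size are illustrative, and the snippet assumes PyOpenCL and at least one OpenCL driver are installed.

import numpy as np
import pyopencl as cl

n = 50000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)

ctx = cl.create_some_context()              # pick an available OpenCL device
queue = cl.CommandQueue(ctx)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
c_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# The kernel itself is written in OpenCL's C-like language.
program = cl.Program(ctx, """
__kernel void vec_add(__global const float *a, __global const float *b, __global float *c) {
  int gid = get_global_id(0);
  c[gid] = a[gid] + b[gid];
}
""").build()

program.vec_add(queue, a.shape, None, a_buf, b_buf, c_buf)   # one work-item per element

result = np.empty_like(a)
cl.enqueue_copy(queue, result, c_buf)
print(np.allclose(result, a + b))            # expected: True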

OpenACC

OpenACC is the youngest of the parallel programming standards described in this article. It was initially released in 2011 by a group of companies comprising Cray, CAPS, Nvidia, and PGI (the Portland Group) to simplify parallel programming of heterogeneous CPU/GPU systems.

“OpenACC is a user-driven directive-based performance-portable parallel programming model designed for scientists and engineers interested in porting their codes to a wide-variety of heterogeneous HPC hardware platforms and architectures with significantly less programming effort than required with a low-level model,” states OpenACC on its official website.

Developers interested in OpenACC can annotate C, C++, and Fortran source code to tell the GPU which areas should be accelerated. The goal is to provide a model for accelerator programming that is portable across operating systems and various types of host CPUs and accelerators.

Which One Should I Use?

The choice between these three parallel computing platforms depends on your goals and the environment you work in. For example, CUDA is widely used in academia, and it’s also considered to be the easiest one to learn. OpenCL is by far the most portable parallel computing platform, although programs written in OpenCL still need to be individually optimized for each target platform.

Learn GPU Coding on LinuxHint.com

GPU Programming with Python

GPU Programming with C++

Further Reading

To become familiar with CUDA, we recommend you follow the instructions in the CUDA Quick Start Guide, which explains how to get CUDA up and running on Linux, Windows, and macOS. AMD’s OpenCL Programming Guide provides a fantastic, in-depth overview of OpenCL, but it assumes that the reader is familiar with the first three chapters of the OpenCL Specification. OpenACC offers a three-step introductory tutorial designed to demonstrate how to take advantage of GPU programming, and more information can be found in the OpenACC specification.

GPU Programming with Python https://linuxhint.com/gpu-programming-python/ Sat, 02 Dec 2017 11:13:17 +0000 https://linuxhint-com.zk153f8d-liquidwebsites.com/?p=20457 In this article, we’ll dive into GPU programming with Python. Using the ease of Python, you can unlock the incredible computing power of your video card’s GPU (graphics processing unit). In this example, we’ll work with NVIDIA’s CUDA library.

Requirements

For this exercise, you'll need either a physical machine running Linux with an NVIDIA GPU or a GPU-based instance on Amazon Web Services. Either should work fine, but if you choose to use a physical machine, make sure the NVIDIA proprietary drivers are installed (see instructions: https://linuxhint.com/install-nvidia-drivers-linux).

You’ll also need the CUDA Toolkit installed. This example uses Ubuntu 16.04 LTS specifically, but there are downloads available for most major Linux distributions at the following URL: https://developer.nvidia.com/cuda-downloads

I prefer the .deb based download, and these examples will assume you chose that route. The file you download is a .deb package but doesn't have a .deb extension, so renaming it to end in .deb is helpful. Then install it with:

sudo dpkg -i package-name.deb

If you are prompted about installing a GPG key, please follow the instructions given to do so.

Now you’ll need to install the cuda package itself. To do so, run:

sudo apt-get update
sudo apt-get install cuda -y

This part can take a while, so you might want to grab a cup of coffee. Once it’s done, I recommend rebooting to ensure all modules are properly reloaded.

Next, you’ll need the Anaconda Python distribution. You can download that here:  https://www.anaconda.com/download/#linux

Grab the 64-bit version and install it like this:

sh Anaconda*.sh

(the wildcard in the above command ensures that it runs regardless of the minor version in the file name)

The default install location should be fine, and in this tutorial, we’ll use it. By default, it installs to ~/anaconda3

At the end of the install, you’ll be prompted to decide if you wish to add Anaconda to your path. Answer yes here to make running the necessary commands easier. To ensure this change takes place, after the installer finishes completely, log out then log back in to your account.

More info on Installing Anaconda: https://linuxhint.com/install-anaconda-python-on-ubuntu/

Finally, we'll need to install Numba. Numba uses the LLVM compiler to compile Python to machine code. This not only enhances the performance of regular Python code but also provides the glue necessary to send instructions to the GPU in binary form. To do this, run:

conda install numba

Limitations and Benefits of GPU Programming

It’s tempting to think that we can convert any Python program into a GPU-based program, dramatically accelerating its performance. However, the GPU on a video card works considerably differently than a standard CPU in a computer.

CPUs handle a lot of different inputs and outputs and have a wide assortment of instructions for dealing with these situations. They are also responsible for accessing memory, dealing with the system bus, handling protection rings, segmentation, and input/output functionality. They are extreme multitaskers with no specific focus.

GPUs, on the other hand, are built to process simple functions blindingly fast. To accomplish this, they expect more uniform inputs and outputs, which they get by specializing in scalar functions. A scalar function takes one or more inputs but returns only a single output, and these values must be data types pre-defined by NumPy.

Example Code

In this example, we’ll create a simple function that takes a list of values, adds them together, and returns the sum. To demonstrate the power of the GPU, we’ll run one of these functions on the CPU and one on the GPU and display the times. The documented code is below:

import numpy as np
from timeit import default_timer as timer
from numba import vectorize

# This should be a substantially high value. On my test machine, this took
# 33 seconds to run via the CPU and just over 3 seconds on the GPU.
NUM_ELEMENTS = 100000000

# This is the CPU version.
def vector_add_cpu(a, b):
  c = np.zeros(NUM_ELEMENTS, dtype=np.float32)
  for i in range(NUM_ELEMENTS):
    c[i] = a[i] + b[i]
  return c

# This is the GPU version. Note the @vectorize decorator. This tells
# numba to turn this into a GPU vectorized function.
@vectorize(["float32(float32, float32)"], target='cuda')
def vector_add_gpu(a, b):
  return a + b

def main():
  a_source = np.ones(NUM_ELEMENTS, dtype=np.float32)
  b_source = np.ones(NUM_ELEMENTS, dtype=np.float32)

  # Time the CPU function
  start = timer()
  vector_add_cpu(a_source, b_source)
  vector_add_cpu_time = timer() - start

  # Time the GPU function
  start = timer()
  vector_add_gpu(a_source, b_source)
  vector_add_gpu_time = timer() - start

  # Report times
  print("CPU function took %f seconds." % vector_add_cpu_time)
  print("GPU function took %f seconds." % vector_add_gpu_time)

  return 0

if __name__ == "__main__":
  main()

To run the example, type:

python gpu-example.py

NOTE: If you run into issues when running your program, try using “conda install accelerate”.

As you can see, the CPU version runs considerably slower.

If it doesn't, your workload is too small. Increase NUM_ELEMENTS to a larger value (on my machine, the break-even point seemed to be around 100 million). This is because setting up the GPU takes a small but noticeable amount of time, so a larger workload is needed to make the operation worthwhile. Once you raise it above the threshold for your machine, you'll notice a substantial performance improvement of the GPU version over the CPU version.
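If you want to pin down the break-even point on your own machine, a small sweep over problem sizes like the sketch below can help. It defines a simple CPU loop and the same kind of @vectorize GPU function as the example above; the sizes in the tuple are arbitrary choices.

import numpy as np
from timeit import default_timer as timer
from numba import vectorize

@vectorize(["float32(float32, float32)"], target='cuda')
def add_gpu(a, b):
  return a + b

def add_cpu(a, b):
  # Plain Python loop, deliberately slow, to mirror the CPU version above.
  c = np.empty_like(a)
  for i in range(a.size):
    c[i] = a[i] + b[i]
  return c

for n in (1000000, 10000000, 100000000):  # arbitrary sweep of problem sizes
  a = np.ones(n, dtype=np.float32)
  b = np.ones(n, dtype=np.float32)

  start = timer()
  add_cpu(a, b)
  cpu_time = timer() - start

  start = timer()
  add_gpu(a, b)
  gpu_time = timer() - start

  print("n=%d: CPU %.2f s, GPU %.2f s" % (n, cpu_time, gpu_time))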

Conclusion

I hope you've enjoyed our basic introduction to GPU programming with Python. Though the example above is trivial, it provides the framework you need to take your ideas further by utilizing the power of your GPU.

Install Nvidia Drivers on CentOS https://linuxhint.com/install-nvidia-drivers-centos/ Fri, 10 Nov 2017 08:19:03 +0000 https://linuxhint-com.zk153f8d-liquidwebsites.com/?p=19959 Install Nvidia Optimus Graphics Drivers on CentOS 7

In this article, I will show you how to set up a new Nvidia Optimus supported graphics card in a hybrid graphics configuration on CentOS 7. Most new laptops and notebooks use this type of configuration, so it's very common these days. I used an ASUS UX303UB with a 2GB Nvidia GeForce 940M and Intel HD Graphics 520 to test everything in this article. Let's get started.

This article applies only to Nvidia Optimus supported graphics cards in a hybrid graphics configuration. You can check whether your Nvidia graphics card supports Optimus technology with the following command:

lspci | grep 'NVIDIA\|VGA'

If two graphics cards are listed, you can follow this article and expect everything to work.

Note: Before proceeding with the installation, turn off Secure Boot from BIOS settings.

First, we have to add some package repositories to our CentOS 7 operating system. Run the following commands to add the package repositories:

Add elrepo repository:

sudo rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
sudo rpm -Uvh http://www.elrepo.org/elrepo-release-7.0.3.el7.elrepo.noarch.rpm

Add epel repository:

sudo yum install epel-release

Add bumblebee repository:

sudo yum -y --nogpgcheck install \
http://install.linux.ncsu.edu/pub/yum/itecs/public/bumblebee/rhel7/noarch/bumblebee-release-1.2-1.noarch.rpm

sudo yum -y --nogpgcheck install \
http://install.linux.ncsu.edu/pub/yum/itecs/public/bumblebee-nonfree/rhel7/noarch/bumblebee-nonfree-release-1.2-1.noarch.rpm

 

Now we have to update the CentOS 7 kernel; otherwise, the rest of the setup will not work.

sudo yum --enablerepo=elrepo-kernel install kernel-ml

Install new kernel development package with the following command:

sudo yum --enablerepo=elrepo-kernel install kernel-ml-devel

You should restart your computer after installing the new kernel and the kernel-ml-devel package. You can then verify the running kernel version with the 'uname -r' command; in my case, it is now 4.13.11.

Now we are ready to install bumblebee Nvidia Optimus drivers. To install bumblebee, run the following command:

sudo yum install bumblebee-nvidia bbswitch-dkms primus kernel-devel

Or the following command, if you want 32-bit compatibility:

sudo yum install bumblebee-nvidia bbswitch-dkms VirtualGL.x86_64 VirtualGL.i686 primus.x86_64 primus.i686 kernel-devel

I will go with the first command.

Once you run the command, press ‘y’ and then press <Enter> to confirm the installation.

 

Your installation should start. It may take several minutes to finish.

Once installed, run the following command to add your user to the bumblebee group.

sudo usermod -aG bumblebee YOUR_USERNAME

Now restart your computer. Once it restarts, you should be able to run the “Nvidia Settings” control panel, which verifies that everything is working correctly.

 

You can check whether everything is working correctly from the command line as well. Run the following command to check if the Nvidia driver and bumblebee are working:

bumblebee-nvidia --check

If the command's output reports no errors, everything is working correctly.

If you have any problem, you should try running the following command:

sudo bumblebee-nvidia --debug --force

If you want to uninstall Nvidia Optimus drivers by Bumblebee, run the following command:

sudo yum remove bumblebee-nvidia bbswitch-dkms primus kernel-devel

Press ‘y’ and then press <Enter>. The Bumblebee Nvidia Optimus drivers should be removed.

 

 

You can also remove the updated kernel with the following commands:

sudo yum remove kernel-ml kernel-ml-devel

Removing the updated kernel is not required, though; you can keep using it if you want.

So that's how you install and uninstall the Nvidia Optimus drivers on CentOS 7. Thanks for reading this article.

Best Graphics Card for Linux https://linuxhint.com/best-graphics-card-for-linux/ Mon, 02 Oct 2017 23:03:36 +0000 https://linuxhint-com.zk153f8d-liquidwebsites.com/?p=19602 A graphics card is a hardware device used to render graphical elements and send them to the screen for display. The market for graphics cards is quite diverse, and each card is built to balance price and performance, so it can be difficult to figure out which one offers the best cost-to-benefit ratio. This article covers the factors to consider when buying a graphics card, the currently available and popular graphics cards that are still supported by their manufacturers, and how to pick the best graphics card for your needs.

Available and Popular Graphics Cards

It's a difficult task to list every graphics card available on the market due to the extremely diverse nature of the graphics card world, but as a rule of thumb, it's better to stick to a fairly recent graphics card instead of an older one because of the manufacturer's support cycle. When a graphics card falls out of this support cycle, it's no longer supported by the manufacturer; hence new drivers, which are required to get optimal performance, no longer cover it.

A driver is a piece of low-level code required for the graphics card to function. Unlike CPUs, graphics cards constantly require driver updates to deliver consistent performance. This is not really necessary for all applications, but certain high-performance applications, such as video games or graphics design software, make constant updates important. At the moment, the following graphics cards are ideal choices, as they are fairly new and still supported by the manufacturer.

NVidia

  • GeForce Titan XP
  • GeForce GTX 1080 TI
  • GeForce GTX 1080
  • GeForce GTX 1070
  • GeForce GTX 1060
  • GeForce GTX 1050 TI
  • GeForce GTX 1050
  • GeForce GTX 980 TI
  • GeForce GTX 980
  • GeForce GTX 970
  • GeForce GTX 960
  • GeForce GTX 950

AMD

  • Radeon Pro Duo
  • Radeon RX 480
  • Radeon RX 470
  • Radeon RX 460
  • R9 295X2
  • R9 290X
  • R9 290
  • R9 285
  • R9 280X
  • R9 280
  • R9 270X
  • R9 270
  • R9 390X
  • R9 390
  • R9 380X
  • R9 380
  • R9 FURY X
  • R9 FURY
  • R9 NANO

Price Comparison

Usually, the graphics card is the most expensive device in a computer system. This is because it costs a lot of money to research and manufacture a GPU (graphics processing unit), which is basically the brain of any graphics card. External factors like high demand also contribute to the price surge, so the manufacturer tends to cover the manufacturing cost with an expensive price tag. The price of any graphics card is highest when it has just come out of the factory, and therefore it's often better to skip the latest generation and focus on the previous generation of graphics cards to get the best cost-to-benefit ratio.

Price also depends on the board manufacturer, the region, and the specific variant. In the United States, graphics cards are relatively cheaper than in Latin America and Asia. Asus and EVGA tend to manufacture expensive boards, whereas Palit is known for cheaper boards, so Asian users should consider going with Palit if price matters.

Even though the GPU manufacturer (either Nvidia or AMD) releases only one or a few GPUs per model, the board manufacturers often tweak the specifications and produce more varieties. So there could be literally a dozen cards with different price tags but the same GPU; for instance, Nvidia only released the GeForce 1080 and its Ti version, but EVGA offers 31 varieties of these two with different specifications and subtle feature differences. Looking at the prices of some graphics cards on Amazon, prices range anywhere from $100 to $2,000, so you can see it's quite a range.

Driver Support

Both Nvidia and AMD support Linux distributions, and the drivers are available in the download section of the respective manufacturer's website. To find out which graphics adapter is currently installed in the system, the following command can be used in a terminal window; it returns the adapter's name and some other useful information.

lspci -k | grep -A 2 -E "(VGA|3D)"


Performance Comparison

The performance depends not only on the GPU (graphics processing unit) but also on the specific version of the same GPU. As explained previously, certain GPUs have multiple versions depending on the board manufacturer; for instance, EVGA's GTX 1070 has 13 varieties with different frequencies and extra features like a water-cooling unit or RGB LEDs.

If price is the main concern, then it's better to ignore the subtle extra features and target the lowest end of any model. For instance, the EVGA GeForce GTX 1060 has two main versions, 3GB and 6GB; both have the same performance but different amounts of video memory, which matters for certain applications like video rendering, 3D graphics rendering, and high-resolution video gaming. The 3GB version is still enough for most applications at 1080p resolution.

Nowadays, a high-end video game requires from 2GB to 6GB of video RAM depending on the monitor resolution. If the monitor resolution is anything under 1080p, then 2GB is sufficient, but at or above 1080p, at least 3GB of video RAM is required for the game to perform smoothly. However, there are not many high-end games for Linux, so this is the least of the concerns.

For general usage like watching movies or playing casual games, even an onboard video adapter is more than enough. So if there is a fairly new Intel processor in the system, a separate graphics adapter is not really necessary.

For resource-hogging software like Autodesk Maya, it's much better to have a high-end graphics card like the GeForce Titan, GTX 1080, Radeon RX 480, or Radeon RX 470. Professional artists tend to use Nvidia Quadro or AMD FirePro cards due to their tremendous horsepower, but these are quite expensive for an average home user.

Monitor Resolution

Monitor resolution increases the need for a high-performance graphics card, because the number of pixels proportionally increases the required processing power and video RAM. A video game that runs fairly well on a 720p monitor may not run at the same frame rate at 1080p if the video card isn't powerful enough to process all the pixels displayed on the screen. So it's advisable to check the system requirements of an application before purchasing a graphics card. Usually, an application states two sets of system requirements, minimum and recommended; for optimal performance, the system should have a graphics card from the recommended side.
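To see how quickly the pixel count grows with resolution, the short Python sketch below compares some common resolutions and the number of pixels a card has to fill per second at 60 frames per second; it is a back-of-the-envelope calculation, not a benchmark.

# Pixels per frame and per second (at 60 fps) for common monitor resolutions.
resolutions = {"720p": (1280, 720), "1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}
fps = 60

for name, (width, height) in resolutions.items():
  pixels = width * height
  print("%s: %d pixels per frame, %d pixels per second" % (name, pixels, pixels * fps))

# 1080p has 2.25x the pixels of 720p, and 4K has 4x the pixels of 1080p,
# which is why the same card can drop frames as the resolution goes up.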

Power Supply

The power supply is the most important accessory to take into consideration when buying a graphics card. A power supply, also known as a PSU, supplies power to the entire system unit, and its capacity is measured in watts (W). Usually, a graphics card draws more power than any other device, so the power supply should have a hefty wattage rating.

That being said, the required wattage depends on the graphics card and the number of graphics cards attached to the system. More powerful graphics cards like the GeForce Titan usually demand at least 600W, and contemporary Titan cards require at least 750W; adding more cards increases the required wattage further, proportionally to the number of graphics cards. Cheaper graphics cards like the GeForce 1060, however, only need 450W, which is the wattage of a standard power supply unit.

If there is already a power supply, then it's recommended to try a graphics card from the GeForce 1000 series or newer, as they tend to offer the best wattage-to-performance ratio; for instance, the GeForce 1050 requires just a 300W PSU, whereas its counterpart, the Radeon RX 460, requires at least 350W to 400W depending on the board manufacturer.

If a power supply isn't already there, then it's recommended to target at least 600W to 700W, as those are the recommended values for mid-range to high-end graphics cards, but more wattage may be required if more peripheral devices are connected to the computer. In such cases, the value can be estimated with the help of an online power consumption calculator, or roughly by hand as shown in the sketch below.
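As a rough illustration of how such a calculation works, the Python sketch below adds up hypothetical component power draws and applies a safety margin; the wattages are placeholders, so substitute the figures for your own parts.

# Hypothetical component power draws in watts -- replace with your own parts' figures.
components = {
  "graphics card (GTX 1070 class)": 150,
  "CPU": 90,
  "motherboard and RAM": 60,
  "drives and fans": 30,
}

total_draw = sum(components.values())
headroom = 1.3                      # roughly 30% margin so the PSU never runs at its limit
recommended = total_draw * headroom

print("Estimated draw: %d W" % total_draw)
print("Recommended PSU rating: about %d W" % (round(recommended / 50.0) * 50))  # rounded to a common size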

Conclusion

A good graphics card will really help with visualizations, gaming, and animation rendering, but it's not worth spending too much if your Linux system's expected usage won't stress the graphics side of the system.


How to Install digiKam 5.6.0 on Ubuntu 17.04 & Ubuntu 16.10 https://linuxhint.com/install-digikam-photo-management-linux/ Tue, 27 Jun 2017 11:33:11 +0000 http://sysads.co.uk/?p=16900 digiKam 5.6.0, recently released, is an advanced digital photo management application for KDE on Linux that makes importing and organizing digital photos a “snap”. Photos are organized in albums that can be sorted chronologically, by folder layout, or by custom collections. Before we proceed with how to install digiKam 5.6.0 on Ubuntu, let's take a quick look at what this release addresses.


digiKam 5.6.0 Highlight

  • The HTML gallery and the video slideshow tools are back
  • The database shrinking (e.g. purging stale thumbnails) is now also supported on MySQL
  • Improvement made on the Grouping items feature
  • Geolocation bookmarks introduce fixes to be fully functional with bundles
  • It now provides support for custom sidecars, as well as custom sidecars type-mime
  • Lots of bugs have also been fixed

The next main digiKam version 6.0.0 is planned for the end of this year, when all Google Summer of Code projects will be ready to be backported for a beta release. In September, we will release a maintenance version 5.7.0 with a set of bugfixes as usual.

For more information about 5.6.0, take a look at the list of more than 81 issues closed in Bugzilla

How to install digiKam 5.6.0 on Ubuntu 17.04

sudo apt-add-repository ppa:philip5/extra

sudo apt-get update && sudo apt-get install digikam

How to install digiKam 5.6.0 on Ubuntu 16.10

sudo apt-add-repository ppa:philip5/extra

sudo apt-get update && sudo apt-get install digikam5

How to uninstall digiKam 5.6.0 from Ubuntu

sudo apt remove digikam5 && sudo apt autoremove