

Better calculators just use floating point math with a few tricks on top to pretend it isn’t floating point math.
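Roughly the kind of trick I mean, as a minimal C++ sketch: keep doing the math in binary floating point, but round the displayed result to fewer significant digits than the double actually carries, so the usual artifacts never reach the screen.

```cpp
#include <cstdio>

int main() {
    double x = 0.1 + 0.2;   // actually stored as 0.30000000000000004... in binary floating point
    printf("%.17g\n", x);   // full precision: the error is visible
    printf("%.10g\n", x);   // display-style rounding to 10 significant digits: prints 0.3
    return 0;
}
```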
The UI looks the same lol
The layers are the big thing, but it's hard to show because the final result looks the same anyway
Yea I edited to say RISC-V specifically, thx
In theory it should be able to be more power efficient. In practice, less development has gone into RISC-V CPU designs, so they're still less power efficient than Arm (and maybe even x86)
Still, a fully path traced game without the loss in detail that comes from heavy spatial and temporal resampling would be great
And with enough performance, we could have that in VR too. According to my calculations in another comment a while ago that I can’t be bothered to find, if this company’s claims are to be believed (unlikely) this card should be fast enough for nearly flawless VR path tracing.
It’s less exciting for gamers than it is for graphics devs, because no existing games are designed to take advantage of this much RT performance
Rasterization could be simulated in software with some driver trickery, but apparently it has less fp32 performance than the 5090 so it would be significantly slower
Still, a RISC-V based GPU is very weird; normally I hear about RISC-V chips being slower and less power efficient than even a regular CPU.
I expect it to be bottlenecked by complex BRDFs and shaders in actual path tracing workloads, but I guess we’ll see what happens.
It’s hard to say. “Open core” means that most of the software is open source (licenses vary) but some features are locked behind a paywall. GitLab takes this approach, for example, and maybe OnlyOffice too.
There are some pretty corporate “open core” software companies tho; that’s more of a grey area
You could keep the kernel tho while changing the GUI
The B580 is pretty fast with RT; it beats the price-comparable Nvidia GPUs
With some games, pre-baking lighting just isn’t possible, or the baked lighting will clearly break when large objects start moving.
Ray tracing opens up whole new options for visual style that wouldn’t really be possible without it (aka would probably look like those low-effort Unity games you see). So far this hasn’t really been taken advantage of, since level designers are used to being limited by the problems that come with rasterization, and we’re just starting to see games come out that only support RT (and therefore don’t need to worry about looking good without it)
See the Tiny Glade graphics talk as an example; it shows both what can be done with RT and the advantages/disadvantages of taking a hardware vs software RT approach.
You can get a ray tracing capable card for $150. Modern iGPUs also support ray tracing. And while hardware RT is not always better than software RT, I would like to see you try to find a non-RT lighting system that can represent small-scale global illumination in a large open world with sharp off-screen reflections.
Yes, the game should account for latency as much as it can, so a conscious decision to lead or trail probably won’t help. It’s more useful for debugging-type purposes imo, like figuring out whether your network is slow or if it’s just the person you’re playing against.
It sounds like they’re tying the effect of attacks to the actual fine-detail game textures/materials, which I guess are only available on the GPU? It’s a weird thing to do and a bad description of it IMO, but that’s what I got from that summary. It wouldn’t be anywhere near as fast as normal hitscan on the CPU, and it also eats GPU time, which is generally the tighter budget given how many threads modern CPUs have.
Since there is probably only 1 bullet shot most of the time on any given frame, and the minimum size of a dispatch on the GPU is usually 32-64 threads (out of maybe 1k-20k), you end up launching a whole wavefront just to calculate this one singular bullet on a single core. GPU cores are also much slower than CPU cores, so clearly the only possible reason to do this is if the data needed literally only exists on the GPU, which it sounds like it does in this case. You would also first have to transfer to the GPU that a shot was taken, and then transfer the result back to the CPU, adding a small amount of latency both ways.
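To make that concrete, here’s a rough CUDA C++ sketch (not how any particular engine does it; trace_shot is just a stand-in for whatever material-aware ray query they actually run): one warp gets launched for a single shot, and the result still has to make a round trip back to the CPU.

```cpp
// Rough sketch of the point above: even one bullet per frame costs a full
// warp-sized launch plus a CPU<->GPU round trip.
#include <cuda_runtime.h>
#include <cstdio>

struct ShotRequest { float origin[3]; float dir[3]; };
struct ShotResult  { int hit; float distance; };

__device__ ShotResult trace_shot(const ShotRequest& req) {
    // placeholder: in the real thing this would walk a BVH and sample materials
    return ShotResult{0, 0.0f};
}

__global__ void resolve_shot(const ShotRequest* req, ShotResult* out) {
    // the hardware schedules at least a full 32-thread warp; only one thread has work to do
    if (threadIdx.x == 0) {
        *out = trace_shot(*req);
    }
}

int main() {
    ShotRequest h_req{{0.f, 0.f, 0.f}, {0.f, 0.f, 1.f}};
    ShotResult  h_res{};

    ShotRequest* d_req; ShotResult* d_res;
    cudaMalloc((void**)&d_req, sizeof(ShotRequest));
    cudaMalloc((void**)&d_res, sizeof(ShotResult));

    // upload the shot, launch 1 block of 32 threads, copy the answer back:
    // two transfer hops of latency wrapped around a trivially small amount of work
    cudaMemcpy(d_req, &h_req, sizeof(ShotRequest), cudaMemcpyHostToDevice);
    resolve_shot<<<1, 32>>>(d_req, d_res);
    cudaMemcpy(&h_res, d_res, sizeof(ShotResult), cudaMemcpyDeviceToHost);

    printf("hit=%d dist=%f\n", h_res.hit, h_res.distance);
    cudaFree(d_req);
    cudaFree(d_res);
    return 0;
}
```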
This also only makes sense if you already use raytracing elsewhere, because you generally need a BVH for raytracing and these are expensive to build.
Although this is using raytracing, the only reason not to support cards without hardware raytracing is that it would take more effort to do so (as you would have to maintain both a normal raytracer and a DXR version)
Honestly, for how many tech/open source people are on Lemmy, it’s surprising how few community contributions there are to the actual software.
With the data from https://lemmy.fediverse.observer/list,
Honestly neither of these gives a very good impression of where users are from, but I don’t think Lemmy collects that data. Maybe if there were some way to check which languages people have listed?
https://www.playstation.com/en-us/accessories/access-controller/
edit: on a more serious note, yes, when studying the biology of humans or similar organisms probably 99% of the time you can pretend there are only two genders/sexes, but
declare pi is 4 via executive order
I think the main difficulty is getting providers like Gmail to recognize your domain as legitimate, since they don’t by default
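Concretely, that mostly means publishing SPF/DKIM/DMARC DNS records for your domain; roughly something like this (example.com, the selector, and the policy values are placeholders you’d adapt to your own mail setup):

```
example.com.                      TXT  "v=spf1 mx -all"
selector._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=<public key from your mail server>"
_dmarc.example.com.               TXT  "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
```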
The TI-84 Plus is based on the Zilog Z80. From 1976. The calculator is still being made, and still costs $100.