

There are some pretty corporate “open core” software companies tho, that’s a more grey area
You could keep the kernel tho while changing the gui
The B580 is pretty fast with RT; it beats the price-comparable Nvidia GPUs.
With some games, pre-baking lighting just isn’t possible, or the baking will clearly break down once some large objects start moving.
Ray tracing opens up whole new options for visual style that wouldn’t really be possible without it (they’d probably end up looking like those low-effort Unity games you see). So far this hasn’t really been taken advantage of, since level designers are used to working around the limits of rasterization, and we’re only just starting to see games that require RT (and therefore don’t need to worry about looking good without it).
See the Tiny Glade graphics talk as an example; it shows both what can be done with RT and the advantages/disadvantages of taking a hardware vs software RT approach.
You can get a ray tracing capable card for $150. Modern iGPUs also support ray tracing. And while hardware RT is not always better than software RT, I would like to see you try to find a non-RT lighting system that can represent small-scale global illumination in a large open world with sharp off-screen reflections.
Yes, the game should account for latency as much as it can, so a conscious decision to lead or trail probably won’t help. It’s more useful for debugging purposes imo, like figuring out whether your network is slow or if it’s just the person you’re playing against.
It sounds like they’re tying the effect of attacks to the actual fine-detail game textures/materials, which I guess are only available on the GPU? It’s a weird thing to do and a bad description of it IMO, but that’s what I got from that summary. It wouldn’t be anywhere near as fast as normal hitscan on the CPU, and it also eats GPU time, which is generally scarcer than CPU time given the thread counts on modern processors.
Since there is probably only one bullet fired on any given frame, and the minimum size of a dispatch on the GPU is usually 32-64 threads (out of maybe 1k-20k), you end up occupying a whole wavefront just to calculate this single bullet with one thread. GPU cores are also much slower than CPU cores, so the only real reason to do this is if the data needed literally only exists on the GPU, which it sounds like it does here. You would also first have to transfer the fact that a shot was taken to the GPU, which would then have to send the hit result back to the CPU, adding a small amount of latency in both directions (rough sketch below).
This also only makes sense if you already use raytracing elsewhere, because you generally need a BVH for raytracing and these are expensive to build.
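To make the dispatch/transfer point concrete, here’s a minimal sketch (not their actual code) of what a single-bullet GPU hitscan query could look like, written in Python with Numba’s CUDA support. The kernel body, the array names, and the “one block of 32 threads” launch are all just illustrative assumptions:

```python
import numpy as np
from numba import cuda

@cuda.jit
def trace_bullet(origin, direction, hit_t):
    # Only thread 0 does any work, but the GPU still schedules a
    # full warp (32 lanes on Nvidia) for this launch.
    if cuda.grid(1) == 0:
        # Placeholder "intersection": a real version would walk the
        # BVH / material data the renderer already keeps on the GPU.
        hit_t[0] = origin[2] + direction[2] * 100.0

# Host side: upload the shot, launch one tiny dispatch, read back the hit.
origin = cuda.to_device(np.array([0.0, 1.7, 0.0], dtype=np.float32))     # CPU -> GPU copy
direction = cuda.to_device(np.array([0.0, 0.0, 1.0], dtype=np.float32))  # CPU -> GPU copy
hit_t = cuda.device_array(1, dtype=np.float32)

trace_bullet[1, 32](origin, direction, hit_t)  # minimum-size dispatch for one bullet
result = hit_t.copy_to_host()                  # GPU -> CPU copy, more latency
```

The two copies and the launch itself are where the round-trip latency comes from, which is why you’d only bother when the data genuinely lives on the GPU.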
Although this is using raytracing, the only reason not to support cards without hardware raytracing is that it would take more effort to do so (as you would have to maintain both a normal raytracer and a DXR version)
Honestly, for how many tech/open source people are on Lemmy, it’s surprising how few community contributions there are to the actual software.
With the data from https://lemmy.fediverse.observer/list,
Honestly, neither of these gives a very good impression of where users are from, but I don’t think Lemmy collects that data. Maybe if there were some way to check which languages people have listed?
https://www.playstation.com/en-us/accessories/access-controller/
edit: on a more serious note, yes, when studying the biology of humans or similar organisms probably 99% of the time you can pretend there are only two genders/sexes, but
declare pi is 4 via executive order
I think the main difficulty is getting providers like Gmail to recognize your domain as legitimate, since they don’t by default.
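For context, most of that “legitimacy” comes down to publishing SPF, DKIM, and DMARC records in your domain’s DNS. Here’s a minimal sketch that just checks whether they exist, assuming the dnspython package, a hypothetical domain example.com, and a DKIM selector called “default” (selectors vary by mail server setup):

```python
import dns.resolver  # pip install dnspython

domain = "example.com"  # hypothetical domain, replace with your own

# SPF lives in the root TXT record, DMARC under _dmarc., DKIM under <selector>._domainkey.
names = [domain, f"_dmarc.{domain}", f"default._domainkey.{domain}"]

for name in names:
    try:
        for rdata in dns.resolver.resolve(name, "TXT"):
            print(f"{name}: {rdata.to_text()}")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        print(f"{name}: no TXT record found, mail from this domain will likely be flagged")
```

Even with all three in place, a brand new domain or IP can still start with low reputation, so it can take a while before Gmail stops flagging it.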
It’s hard to say. “Open core” means that most of the software is open source (licenses vary) but some features are locked behind a paywall. GitLab takes this approach, for example, and maybe OnlyOffice too.