Good point. Though, the vast majority of ML training and use is tensor math on floating points, so largely dot and cross products, among other matrix operations.
I think you’re thinking of the famous fast inverse square root algorithm from Quake.
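For anyone who hasn't seen it, the trick looks roughly like this (sketched from memory, with `memcpy` in place of the original's pointer punning, which is undefined behavior in modern C; today you'd just use `1.0f / sqrtf(x)`):

```c
#include <stdint.h>
#include <string.h>

/* Fast inverse square root, Quake III style. */
float q_rsqrt(float number) {
    float x2 = number * 0.5f;
    float y  = number;
    uint32_t i;

    memcpy(&i, &y, sizeof i);      /* reinterpret the float's bits as an integer */
    i = 0x5f3759df - (i >> 1);     /* the famous magic-constant approximation */
    memcpy(&y, &i, sizeof y);
    y = y * (1.5f - x2 * y * y);   /* one Newton-Raphson refinement step */
    return y;                      /* within roughly 0.2% of 1/sqrt(number) */
}
```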
With respect to the top comment, the only reason 3D graphics are possible (even at 850W of power consumption) is due to taking a bunch of shortcuts and approximations, like culling of polygons. If it's a reasonable shortcut, it either has been or will be taken.
Good question. Odd not to include one.
How did I miss that…
Royal Swiss?
God, can you imagine buying a game on the Microsoft store? Even for free, the MS store is a place I refuse to have any degree of lock-in with.
I mean, you can do HTML -> TeX -> PDF with Pandoc, or go to pretty much any other format. I would say writing Markdown and passing it to TeX or directly to PDF is the most practical.
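The invocations look something like this (a sketch; assumes Pandoc plus a LaTeX engine like pdflatex are installed, and the filenames are placeholders):

```shell
# HTML -> TeX, then TeX -> PDF
pandoc page.html -o page.tex
pandoc page.tex -o page.pdf

# or Markdown straight to PDF (Pandoc runs LaTeX under the hood)
pandoc notes.md -o notes.pdf
```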
Jabref is great. Also, if you need other formats you can always import the bibtex file into Zotero.
No worries, it’s pretty hard to keep track when their naming scheme is “it has a K in it”…
Can you unmount it? You may not be able to change the boot flag while it’s mounted.
If that doesn’t work, you likely can’t remove the boot flag while the system is booted. Try booting from installation media and changing the flag there.
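From the live environment it'd be something like this (device and partition number are hypothetical; check yours with `lsblk` first):

```shell
# make sure the partition is not mounted
sudo umount /dev/sda1

# clear the boot flag on partition 1 of /dev/sda
sudo parted /dev/sda set 1 boot off
```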
I think most ereaders support rendering HTML, so why not just save the HTML page? That'd be a lot easier than converting formats.
Okular is pretty great; I haven't found a GTK-based package that does good PDF annotation.
Yeah, came here to recommend this. It’s basic, but the UI is great.
It definitely is, but I guess domestic demand could outstrip production.
Yeah if that’s not on-brand for Intel, I don’t know what is! I wonder what the max power draw for the 14900H is, it’s gotta be close 😂
Huh I did not realize that. Is that an NTFS vs. EXT4 thing?
I assumed that Linux was not really under the control of the US, but I guess the Foundation is incorporated in the US as a 501(c)(6) and the kernel org itself is a 501(c)(3), so that does give Congress more levers on the kernel than I expected.
Not to mention that most (all?) of the major corporate funders of the kernel are US-based…
I really hope the kernel doesn't get (geo)politicized.
Edit: based on @RobotToaster’s link, yeah, it looks like every major “employer” contributor to the kernel other than Huawei, Linaro, Arm, and Suse is American. Arm is probably working mostly on support for its architecture, so I guess it’s Linaro (UK) and Suse (DE).
That’s not to downplay the role of independent contributors, but it seems like a good indicator of the “power of the purse strings”.
Edit 2: here’s a more recent set of development statistics from LWN. Looks like the ordering has changed quite a bit since 2022, or it varies a lot from one kernel version to the next.
I've noticed that it launches in 1-2 seconds on Linux Mint as opposed to like 10 on Windows, for some reason. Seems weird, since based on the status messages the rate-limiting step appears to be importing a bunch of Python modules, which shouldn't be drastically different between OSes??
The most confusing thing is that “200V” isn't a CPU, it’s the equivalent of “15th gen”.
The numbers before the V are un-parseable, but at least for the actual parts it’s an “Ultra 7 236.1425926V” or something.
Ummm, or the authors are concerned about retribution, because Stallman and the FSF are very powerful in the FOSS community, and I think it’s reasonably likely that they would be sued (seemingly with poor grounds) or harassed online for publishing it.