This. I have that type (regenerative MVHR) installed in the attic for upstairs, and a synced pair of in-wall ceramic (recuperative) units on opposite sides of the main living area downstairs (eliminating ducting, albeit with reduced efficiency). I haven't attempted any energy/ROI calculations, but fresh filtered air, lower humidity and a good night's sleep are well worth the claimed single-digit-watt power usage to me.
In old-school FORTRAN (I only recall WATFOR/F77; my uni's computers were quite ancient) subroutine (aka "subprogram") parameters are call-by-reference. If you passed a literal constant, it would be materialized as a variable so it could be aliased/passed by reference. Due to "constant pooling", modifications to a variable that aliased a constant could then propagate throughout the rest of the program wherever that constant[sic] was used.
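The failure mode can be sketched in Python as a stand-in for the FORTRAN mechanism (`Cell`, `literal` and `bump` are invented names for illustration, not anything from a real compiler):

```python
# Hypothetical sketch of FORTRAN-style constant pooling plus
# call-by-reference parameters (Python stand-in, not real FORTRAN).
class Cell:
    """A mutable storage location, like a word of memory."""
    def __init__(self, value):
        self.value = value

constant_pool = {}  # one shared cell per literal, like a compiler's constant pool

def literal(v):
    # every textual occurrence of the literal shares the same cell
    return constant_pool.setdefault(v, Cell(v))

def bump(param):
    # a "subroutine" that writes through its by-reference argument
    param.value += 1

bump(literal(2))      # like CALL BUMP(2): silently corrupts the pooled constant
x = literal(2).value  # later code that "uses the constant 2"
print(x)              # → 3, not 2
```

The bug is invisible at the call site: nothing about `bump(literal(2))` suggests the literal `2` will mean `3` for the rest of the run.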
Ireland is lucky enough to have several suitable sites, but just one operational: Turlough Hill, which has been running for over 50 years and is in use daily. It's at least as valuable for grid stability and (relatively) rapid dispatch as for raw capacity. Output is ~0.7% of total daily demand (~120 GWh) and ~5% of daily peak (~6 GW), wintertime figures. For comparison, electricity usage has increased about 8-fold since it was deployed in 1974.
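A quick back-of-envelope check of those percentages, assuming Turlough Hill's commonly quoted 292 MW installed capacity and roughly 0.8 GWh of daily generation (my assumed inputs, not figures from the comment):

```python
# Rough sanity check of the grid-share figures (assumed inputs).
capacity_mw = 292          # Turlough Hill installed capacity (assumed)
daily_peak_mw = 6000       # ~6 GW winter peak
daily_demand_gwh = 120     # ~120 GWh winter daily demand
daily_output_gwh = 0.8     # assumed typical daily generation

print(round(100 * capacity_mw / daily_peak_mw, 1))          # → 4.9 (~5% of peak)
print(round(100 * daily_output_gwh / daily_demand_gwh, 1))  # → 0.7 (% of demand)
```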
Total US Fed. Gov. contract spending for 2024 was (according to gao.gov) $755B. That's a lot of drinking, never mind any anticipated AI spending boost next year.
CD storage has an interesting take: the available sector size varies by use, i.e. audio or MPEG-1 video (VideoCD) at 2352 data octets per sector (with two layers of media-level ECC), versus regular data at 2048 octets per sector, where the extra per-sector EDC/ECC can be exposed by reading "raw". I learned this the hard way with VideoPack's malformed VCD images; I wrote a tool to post-process the images to recreate the correct EDC/ECC per sector. Fun fact: ISO9660 stores file metadata simultaneously in big-endian and little-endian form (AFAIR VP used to fluff that up too).
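The ISO 9660 "both-endian" encoding can be sketched like this (function names are mine, but the format really does store each 32-bit field twice, little-endian then big-endian, back to back):

```python
import struct

# ISO 9660 "both-byte-order" 32-bit field: the value is stored twice,
# little-endian first, big-endian second (8 bytes total).
def pack_both_endian_u32(v):
    return struct.pack('<I', v) + struct.pack('>I', v)

def read_both_endian_u32(b):
    lo = struct.unpack_from('<I', b, 0)[0]
    hi = struct.unpack_from('>I', b, 4)[0]
    if lo != hi:
        raise ValueError("mismatched both-endian field (malformed image?)")
    return lo

print(read_both_endian_u32(pack_both_endian_u32(2048)))  # → 2048
```

A tool that only checks one of the two copies will happily read an image whose other copy is garbage, which is presumably how malformed images survive in the wild.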
Personally, I prefer the word "bytes", but "octets" is technically more accurate, as there are systems that use differently sized bytes. A lot of these are obsolete, but there are also current examples: in most FPGAs that provide SRAM blocks, the memory is actually arranged 9, 18 or 36 bits wide, with the expectation that you'll use the extra bits for parity or flags of some kind.
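For instance, the ninth bit of a 9-bit word might carry even parity over the data byte — a hypothetical sketch of the idea, not any particular vendor's convention:

```python
# Store a byte in a 9-bit word, using bit 8 for even parity (assumed scheme).
def to_9bit(byte):
    parity = bin(byte).count("1") & 1   # 1 if the data byte has odd popcount
    return (parity << 8) | byte

def check_9bit(word):
    # total popcount (data bits + parity bit) must be even
    return bin(word).count("1") % 2 == 0

w = to_9bit(0b01000001)    # two data bits set, so the parity bit stays 0
print(check_9bit(w))       # → True
print(check_9bit(w ^ 1))   # single-bit flip detected → False
```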
Octets is the term used in most international standards instead of the American "byte".
"Octet" has the advantage that it is not ambiguous. In old computer documentation, from the fifties to the late sixties, a "byte" could have meant any size between 6 bits and 16 bits, the same like "word", which could have meant anything between 8 bits and 64 bits, including values like 12 bits, 18 bits, 36 bits, 60 bits, or even 43 bits.
Traditionally, computer memory is divided into pages, which are divided into lines, which are divided into words, which are divided into bytes. However, the sizes of all of those "units" have varied over very wide ranges in early computers.
IBM System/360 chose the 8-bit byte, and the dominance of IBM then cemented the now-ubiquitous meaning of "byte", but there were many computers before System/360, and many coexisting for some years with the IBM 360 and later mainframes, on which "byte" meant something else.
Not problematic, minor pedantry. Having spent much time reading (and occasionally writing) technical documentation, I reach for octets, binary prefixes, and other wanton pedantry wherever they're likely to be understood/appreciated, or where precision is required.
FTR, ECMA-130 (the standard equivalent to the CD-ROM "Yellow Book") is littered with the term "8-bit bytes", so it was certainly a thing then. Precision matters when you're simultaneously discussing eight-to-fourteen modulation and the 17 channel "bits" that hit the media for each octet, as noted in a sibling comment.
It seems like it, but it can't be only that. A float64 representation of an int in the range 2^58 to 2^59 should be rounded to a multiple of 2^6, i.e. 348555896224571968 as you found (3.48555896224571968E17).
(the final digit 9 in the math.log2() expression was lost; with all digits it's 2^58, not 2^54)
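The rounding granularity is easy to confirm in Python (`math.ulp` needs 3.9+; the input integer below is the rounded value from the comment with the lost final digit 9 restored):

```python
import math

# float64 values in [2^58, 2^59) are spaced 2^(58-52) = 64 apart
print(math.ulp(2.0**58))     # → 64.0

x = 348555896224571969       # the original integer, ending in 9
print(int(float(x)))         # → 348555896224571968, the nearest multiple of 64
print(int(float(x)) % 64)    # → 0
```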
The unexpected output (according to the bug report) arises from JavaScript, which does NOT round like everything else, for reasons I don't understand. In my limited testing it seems to prefer rounding to arbitrary multiples of 10.
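One plausible explanation (my guess, not something the comment establishes): JavaScript's default number-to-string conversion prints the shortest decimal that round-trips to the same float64, and a shorter decimal ending in 0 often qualifies — which would look like rounding to multiples of 10. Sketched in Python:

```python
# 348555896224571970 ends in 0, has one fewer significant digit than
# 348555896224571968, and parses back to the exact same float64 — so a
# shortest-round-trip printer (like JS's) may legitimately choose it.
print(float(348555896224571970) == float(348555896224571968))  # → True
```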
The whole Tapo/Kasa interop thing was badly handled a few years back, too. Put me right off, when most vendors were dangling the seamless-integration carrot to distract you from the lock-in.
One of the first Spectrum emulators (JPP?) used a VGA text mode with a 2-pixel-high font where each character glyph was its own ordinal, i.e. character 65 was two rows of 01000001 pixels. That meant you could draw individual pixel rows bytewise exactly as the Spectrum did, and just take care of the Y-offset bit shuffle and fake the colour clash.
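The font trick can be sketched as follows (assuming the usual uploadable-VGA-font layout of consecutive scanlines per glyph, here just two):

```python
# Build a 256-glyph font where glyph N's two scanlines are the bit pattern
# of N itself: writing byte N to text memory then paints N's bits as an
# 8-pixel-wide row on screen.
font = bytearray()
for n in range(256):
    font += bytes([n, n])   # two identical 8-pixel scanlines per glyph

# glyph for 65 (0b01000001): pixels lit at bit 6 and bit 0, on both rows
print(format(font[65 * 2], "08b"))       # → 01000001
print(font[65 * 2] == font[65 * 2 + 1])  # → True
```

With that font loaded, copying a Spectrum screen byte into text memory reproduces its pixels directly; only the Spectrum's interleaved screen addressing and the attribute (colour) bytes need separate handling.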