
Your example is different than the example in the post.

Specifically, `channel = 11`, an integer.

If it was a string then it parses very quickly.


If what you're saying is true (the type is fixed as an integer), then it's even easier in TFA's case. No inference necessary.

In my code channel is not a string, it's one type of the 31-set of (String, Foo01, Foo02, .., Foo30). So it needs to be inferred via HM.

> If it was a string then it parses very quickly.

"Parses"? I don't think that's the issue. Did you try it?

----- EDIT ------

I made it an Int

  let channel = 11 :: Int

  instance IsString Int where
    fromString = undefined

  instance Semigroup Int where
    (<>) = undefined


  real    0m0.543s
  user    0m0.396s
  sys     0m0.148s


The type is an inferred integer literal in Swift (in the Swift standard library this is the `ExpressibleByIntegerLiteral` protocol; string literals use the `ExpressibleByStringLiteral` protocol).

The reason this causes issues with the type checker is it has to consider all the possible combinations of the `+` operator against all the possible types that can be represented by an inferred integer literal.

This is what's causing the type checker to try every possible combination of the types implementing the `+` operator with the types implementing `ExpressibleByIntegerLiteral` and `ExpressibleByStringLiteral` in the standard library. That combination produces 59k+ permutations without even looking at non-standard library types.

If any of the types in the expression had an explicit type then it would be type checked basically instantly. It's the fact that none of the values in the expression have explicit types that is causing the type checker to consider so many different combinations.
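
To make that concrete, here's a minimal sketch (the `Channel` type below is hypothetical, not something from the post): any type can adopt `ExpressibleByIntegerLiteral`, so every such conformance is another candidate the solver has to weigh for a bare literal like `11`, until an annotation pins it down.

  struct Channel: ExpressibleByIntegerLiteral {
      let raw: Int
      init(integerLiteral value: Int) { self.raw = value }
  }

  let a: Channel = 11   // annotation pins the literal to Channel
  let b = 11            // no annotation: defaults to Int, but only after the
                        // solver has ruled out the other candidate types

  // Pinning one operand collapses the overload search for `+`:
  let url = "/api/" + String(b) + "/picture"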


> The reason this causes issues with the type checker is it has to consider all the possible combinations of the `+` operator against all the possible types that can be represented by an inferred integer literal.

Can you please go back and read what I wrote and come up with any plausible alternative explanation for why I wrote the code that I wrote, if not to overload the HM with too many possible types to infer?

> If any of the types in the expression had an explicit type then it would be type checked basically instantly.

Did you try this?

> It's the fact that none of the values in the expression have explicit types that is causing the type checker to consider so many different combinations.

That's what I wrote in my first version. No explicit types. Then I got some comment about it needing to be an Int.

> That combination produces 59k+ permutations without even looking at non-standard library types.

Mine should reject 26,439,622,160,640 invalid typings ((31 ^ 9) - 31) to land on one of 31 possible well-typed readings of this program.


The Haskell source has `let channel = "11"` vs `let channel = 11`. The example from the post looks like it should be pretty straightforward, but the Swift compiler falls over when you try it.

Trying it locally for example:

  # Original example
  $ time swiftc -typecheck - <<-HERE
  let address = "127.0.0.1"
  let username = "steve"
  let password = "1234"
  let channel = 11
  
  let url = "http://" + username
              + ":" + password
              + "@" + address
              + "/api/" + channel        
              + "/picture"
  
  print(url)
  
  HERE
  <stdin>:6:5: error: the compiler is unable to type-check this expression in reasonable time; try breaking up the expression into distinct sub-expressions
   4 | let channel = 11
   5 | 
   6 | let url = "http://" + username 
     |     `- error: the compiler is unable to type-check this expression in reasonable time; try breaking up the expression into distinct sub-expressions
   7 |             + ":" + password 
   8 |             + "@" + address 
  swiftc -typecheck - <<<''  36.38s user 1.40s system 96% cpu 39.154 total
  
  # Adding a type to one of the expression values
  $ time swiftc -typecheck - <<-HERE
  let address = "127.0.0.1"
  let username = "steve"
  let password = "1234"
  let channel = 11
  
  let url = "http://" + username
              + ":" + password
              + "@" + address
              + "/api/" + String(channel)
              + "/picture"

  print(url)
  
  HERE
  swiftc -typecheck - <<<''  0.11s user 0.03s system 74% cpu 0.192 total

Which is roughly in line with the numbers in the original post.


Type checking is horribly slow in Swift if 59k things to check cause 30 seconds of slowdown. That would mean that on a 4 GHz processor it requires more than 2 million operations per check. That’s insane no matter how you slice it.


You can use the ‘atob’ and ‘btoa’ functions for some of that.


Those functions are fundamentally broken: https://developer.mozilla.org/en-US/docs/Glossary/Base64#jav...

See the whole section on converting arbitrary binary data and the complex ways to do it.


Although those functions operate on "binary strings", not Uint8Arrays, and as far as I'm aware vanilla JS doesn't expose an especially clean way to convert between the two.


If you have a short-lived CLI tool, disabling the GC might be useful, but that’s likely an exceptional case.


I know most of these are compiler intrinsics, but it’s good to have them standardized.


There is a setting in Preferences > Websites > Notifications to disable notifications entirely.


I dunno, then there would only be 2 actively maintained browser engines, Chrome and Firefox.

WebKit is doing pretty well on https://wpt.fyi/interop-2022, so I'm not sure why Safari should be retired.


The problem is a mountain of older iPads, iPhones, and iPods that aren't eligible for the latest major iOS upgrades, and thus get stuck with an outdated Safari/WKWebView/WebKit implementation. This is only going to get worse, as the EOL'd iPads these days are quite powerful and otherwise usable. And due to Apple policies banning bring-your-own-HTML-engine, Firefox or Chrome can't save them either :(


Most of my Safari compat issues are on older iOS devices that don't get Safari upgrades; for better or worse, those devices do last a long time, and a lot of people resist upgrading Mac software.


At least it definitely shouldn't be the only available browser, so we could say that the Safari monopoly should be gone.


https://devblogs.microsoft.com/typescript/a-proposal-for-typ...

Has additional context on this proposal and has previously been linked here.


There is queueMicrotask (https://developer.mozilla.org/en-US/docs/Web/API/queueMicrot...) to queue a callback onto the microtask queue at the end of the current task, and inside an async function you can use an `await 0;` to yield to the microtask queue as well.


Node does support ESM now by default (and has supported it for a while behind a flag), https://nodejs.org/api/esm.html


The C++ wrappers break ARC, so if you use this library you'll have to manually retain/release every object. The autorelease pool is helpful if you autorelease an object, but you'll need to manually manage the memory when using this set of helpers.

In Obj-C or Swift, ARC would handle that for you, so this makes the memory management aspect of using Metal a bit more of a headache.
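
For contrast, a rough sketch of the same flow from Swift (the function name is made up and error handling is omitted), where ARC retains and releases the Metal objects for you:

  import Metal

  func submitEmptyCommandBuffer() {
      // ARC keeps these objects alive for the scope and releases them when
      // the function returns; with the C++ wrappers the equivalent objects
      // have to be retained/released by hand.
      guard let device = MTLCreateSystemDefaultDevice(),
            let queue = device.makeCommandQueue(),
            let buffer = queue.makeCommandBuffer() else { return }
      buffer.commit()
  }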


Right. Would you happen to know if there's a way to make a retained object actually deallocate on last "release," other than putting an autorelease pool around it? Would you also happen to know if the Swift bindings have this same wonky autorelease behavior, or more straightforward Swift-style refcounting?


From a quick grep, it looks like this wrapper does not automatically call autorelease, so autorelease happens only when a method implementation autoreleases its own return value (or if you call autorelease yourself).

Objective-C has a standard rule for when a method is supposed to autorelease its return value: to quote the metal-cpp readme, it's when "you create an object with a method that does not begin with `alloc`, `new`, `copy`, or `mutableCopy`". In many cases, these are convenience methods that also have equivalent "initWith" methods, e.g.

    [NSString stringWithFormat:@"%d", 42]
is equivalent to

    [[[NSString alloc] initWithFormat:@"%d", 42] autorelease]
But I'm not too familiar with Metal, and… it looks like Metal doesn't have very many of these methods in the first place. Instead it has a lot of 'new' methods, which shouldn't use autorelease.

When a method does call autorelease, Swift doesn't have any way of getting around it, though if there are autoreleasing and alloc/init variants of the same method, it will prefer alloc/init. Other than that, Swift likes to use the function objc_retainAutoreleasedReturnValue [1], which may prevent the object from going on the autorelease pool (with the 'optimized return' magic), but only as a non-guaranteed optimization.

[1] https://github.com/apple-opensource/objc4/blob/a367941bce42b...


Thanks, this is helpful. The specific method that's causing me trouble right at the moment is "computeCommandEncoder"[1], which is a method on MTLCommandBuffer, and I think not in the "new" family. In the Rust bindings[2], this is just a msg_send! and from what I can tell is getting an autoreleased reference, not a retained one.

It looks like objc_retainAutoreleasedReturnValue might be exactly what I'm looking for, even if it isn't 100% guaranteed; if I'm understanding, it would be safe, and wouldn't actually leak as long as you had an autorelease pool somewhere in the chain. However, I'm not seeing a binding to it in the objc crate[3]. Sounds like maybe I should file an issue?

Also, given that such a method seems obviously useful, I wonder why it's not being called from these C++ bindings?

[1]: https://developer.apple.com/documentation/metal/mtlcommandbu...

[2]: https://github.com/gfx-rs/metal-rs/blob/master/src/commandbu...

[3]: https://docs.rs/objc/0.2.7/objc/runtime/index.html


> The specific method that's causing me trouble right at the moment is "computeCommandEncoder"

Yeah, it looks like there's no way to avoid autorelease here.

> It looks like objc_retainAutoreleasedReturnValue might be exactly what I'm looking for, even if it isn't 100% guaranteed; if I'm understanding, it would be safe, and wouldn't actually leak as long as you had an autorelease pool somewhere in the chain.

Indeed it would be safe and wouldn't leak, but the optimization is very much not guaranteed. It's based on the autorelease implementation manually reading the instruction at its return address to see if it's about to call objc_retainAutoreleasedReturnValue. See the description here:

https://github.com/apple-opensource/objc4/blob/a367941bce42b...

In fact – I did not know this before just now – on every arch other than x86-64 it requires a magic assembly sequence to be placed between the call to an autoreleasing method and the call to objc_retainAutoreleasedReturnValue.

It looks like swiftc implements this by just emitting LLVM inline asm blocks:

    %6 = call %1* bitcast (void ()* @objc_msgSend to %1* (i8*, i8*, %0*)*)(i8* %5, i8* %3, %0* %4) #4
    call void asm sideeffect "mov\09fp, fp\09\09// marker for objc_retainAutoreleaseReturnValue", ""()
    %7 = bitcast %1* %6 to i8*
    %8 = call i8* @llvm.objc.retainAutoreleasedReturnValue(i8* %7)
This is optimistically assuming that LLVM won't emit any instructions between the call instruction and the magic asm, which is not guaranteed, especially if compiler optimizations are off. But if it does emit extra instructions, then you just don't get the autorelease optimization: the object is added to the autorelease pool, and objc_retainAutoreleasedReturnValue simply calls objc_retain.

(…Though, on second look, it seems that swiftc and clang sometimes use a different, more robust approach to emitting the same magic instruction… but only sometimes.)

Regardless, enough stars have to align for the optimization to work that you shouldn't rely on it to avoid a (temporary) memory leak; you should only treat it as an optional micro-optimization.

That said, the C++ bindings could have implemented the same scheme using inline assembly. And so could the Rust crate (edit: well, I guess inline asm is not stable in Rust yet). It's not like the magic instructions are ABI-unstable or anything, given that clang and swiftc happily stick them in when compiling any old Objective-C or Swift code. But I'm guessing the authors of the C++ bindings either didn't want to bother with inline assembly, or considered it an unnecessary micro-optimization. Or perhaps didn't even know about it. /shrug/


Thanks very much for this detailed information, it's very helpful. I've filed https://github.com/gfx-rs/metal-rs/issues/222 to track it in the Rust ecosystem side.


If the object is not autoreleased then a release call will deallocate the object (once the retain count reaches zero); otherwise it is added to the nearest autorelease pool on the current stack and deallocated when the pool is drained.

Swift and Obj-C have the same ARC semantics, so I'm not sure what you mean by Swift-style refcounting. It should be identical to the Obj-C ARC semantics.

https://clang.llvm.org/docs/AutomaticReferenceCounting.html outlines the ARC semantics, including the autorelease behavior.
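
A rough Swift sketch of the autorelease part (DateFormatter is just a stand-in for any factory-style call that may autorelease its result): wrapping the call in an explicit pool bounds how long an autoreleased return value can live.

  import Foundation

  autoreleasepool {
      // Factory-style Objective-C calls may autorelease their results; an
      // explicit pool guarantees those objects are released no later than
      // the end of this block, rather than at the next drain of an outer
      // pool (e.g. the run loop's).
      let formatter = DateFormatter()
      formatter.dateStyle = .medium
      print(formatter.string(from: Date()))
  }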


By "swift-style refcounting" I mean the object is reliably deallocated exactly when the last release is called. Following up on the response to comex, I would say I would get these semantics if I called objc_retainAutoreleasedReturnValue on the allocating method's return value, and it actually worked.


In general, a retained object is deallocated on last release. However ownership of some objects somewhere may have been given to an autoreleasepool, in which case “the last release” for those objects will come from the pool. To what extent this happens is implementation-defined.

Swift and ObjC implementations have levers which discourage objects being sent to the pool in common cases. It is possible to pull them from other languages but not easy.


Should be pretty trivial to create RAII Obj-C variants of shared_ptr and unique_ptr that automatically call retain and release as the reference is acquired, and when it leaves scope, respectively.

It doesn't break ARC, right? It just won't do the automatic reference counting in C++ source. You can send it back and forth over the wall with CFBridgingRetain and CFBridgingRelease, no?

Having an autorelease pool is pretty standard practice, and ARC works the same way in ObjC (and I assume Swift) apps - stuff that was autoreleased accumulates in the pool until it is drained, which tends to happen every turn of the run loop.


> It doesn't break ARC, right? It just won't do the automatic reference counting in C++ source. You can send it back and forth over the wall with CFBridgingRetain and CFBridgingRelease, no?

I guess I should say that clang's ARC doesn't apply to these C++ wrappers.

The C++ code uses the Obj-C runtime APIs to load symbols and dispatch the Metal calls. So you could pass these references to other Obj-C code and they should have the correct refcounts afterwards.

Having to manually retain/release objects is a bit of a pain, but it's workable.

Using a C++ RAII type to retain/release is also somewhat doable, but I worked in a codebase that had that kind of code, and it can be frustrating to keep the refcounts correctly synchronized. Although that was back with C++11, so I'm sure things have changed that would make this easier today.


Thanks for the follow-up!


Huh, I'm surprised. I had just assumed without looking that this library provided wrapper classes where the copy constructor calls retain and the destructor calls release. Apparently not.


Yeah, kind of weird there's no smart ARC class, although I suppose you need a way to override it when you need a weak reference. I guess it would be trivial to add one?

