So for my tool, I really need the live UI mockup so I don't have to export first and tweak the colors until they work (e.g. the off-white/very-light colors used for backgrounds often turn out too vibrant otherwise). The control-point-based curve editing helps to explore hue/saturation/lightness curves around a brand color without a lot of clicking, and I want the option of palettes where each color scale follows the same steps in lightness (for predictable contrast between steps from different color scales).
Barely any designers I work with know about P3 colors (it feels like P3 mostly appeals to developers right now, for programmatic reasons?), so I'm not that interested in P3 if it means using OKLCH with its intimidating-looking color picker. My tool uses HSLuv instead, which looks familiar, like an HSL color picker, except that unlike HSL only the lightness slider alters the WCAG contrast, so HSLuv (while limited to sRGB) is great for exploring accessible colors.
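You can check that property yourself with a rough sketch (not my tool's code; it assumes the `hsluv` Python package plus the WCAG 2.x luminance formula): sweeping hue and saturation at a fixed HSLuv lightness leaves the contrast ratio essentially unchanged.

```python
from hsluv import hsluv_to_rgb  # pip install hsluv

def wcag_luminance(rgb):
    # WCAG 2.x relative luminance from sRGB components in [0, 1]
    def lin(c):
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (lin(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_vs_white(rgb):
    # white has relative luminance 1.0 and is always the lighter color here
    return (1.0 + 0.05) / (wcag_luminance(rgb) + 0.05)

for hue in (0, 120, 240):
    for sat in (40, 100):
        rgb = hsluv_to_rgb([hue, sat, 40])  # HSLuv: h in 0-360, s/l in 0-100
        print(f"h={hue:3} s={sat:3} contrast={contrast_vs_white(rgb):.3f}")
# Every line prints a near-identical ratio: only lightness moves WCAG contrast.
```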
I've actually got support for APCA, but I find many people already struggle to understand WCAG contrast requirements. There's Figma export too.
Anyway, there's lots of overlap between different color tools, but the small details matter for different workflows and needs. I've also started to realise that most designers need a lot of introduction to building (accessible) color palettes in general, so it's a tricky puzzle between adding features and keeping things simple, which is why I'm very open to suggestions!
> Artificial intelligence (AI) developers are increasingly building language models with warm and empathetic personas that millions of people now use for advice, therapy, and companionship. Here, we show how this creates a significant trade-off: optimizing language models for warmth undermines their reliability, especially when users express vulnerability. We conducted controlled experiments on five language models of varying sizes and architectures, training them to produce warmer, more empathetic responses, then evaluating them on safety-critical tasks. Warm models showed substantially higher error rates (+10 to +30 percentage points) than their original counterparts, promoting conspiracy theories, providing incorrect factual information, and offering problematic medical advice. They were also significantly more likely to validate incorrect user beliefs, particularly when user messages expressed sadness. Importantly, these effects were consistent across different model architectures, and occurred despite preserved performance on standard benchmarks, revealing systematic risks that current evaluation practices may fail to detect. As human-like AI systems are deployed at an unprecedented scale, our findings indicate a need to rethink how we develop and oversee these systems that are reshaping human relationships and social interaction.
Does anyone know what Mojo is doing that Julia cannot? I appreciate that Julia is currently limited by its ecosystem (although it does interface nicely with Python), but I don't see how Mojo is any better, then.
Especially because Julia has pretty user-friendly and robust GPU capabilities such as JuliaGPU and Reactant [2], among other generic-Julia-code-to-GPU options.
I get the impression that most of the comments in this thread don't understand what a GPU kernel is. These high-level languages like Python and Julia are not running on the GPU; they are calling into kernels usually written in C++. The goal is different with Mojo, as it says at the top of the article:
Julia is high-level, yes, but Julia's semantics allow it to be compiled down to machine code without a "runtime interpreter". This is a core differentiating feature from Python. Julia can be used to write GPU kernels.
It doesn't make sense to lump Python and Julia together in this high-level/low-level split. Julia is like Python if Numba were built in: your code gets JIT-compiled to native code, so you can (for example) write for loops to process an array without the interpreter overhead you get with Python.
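For anyone who hasn't seen Numba, a minimal sketch of that effect (assuming the `numba` package; the function itself is made up for illustration):

```python
import numpy as np
from numba import njit  # pip install numba

@njit
def moving_sum(a, w):
    # Plain nested Python loops, JIT-compiled to native code on first call;
    # in plain CPython the same loops would pay interpreter overhead per element.
    out = np.empty(a.size - w + 1)
    for i in range(out.size):
        s = 0.0
        for j in range(w):
            s += a[i + j]
        out[i] = s
    return out

print(moving_sum(np.arange(10.0), 3))  # [ 3.  6.  9. 12. 15. 18. 21. 24.]
```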
People have used the same infrastructure to allow you to compile Julia code (with restrictions) into GPU kernels.
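The closest Python analogue, for illustration, is Numba's CUDA backend, which likewise compiles a restricted subset of the language into GPU kernels (this sketch assumes a CUDA-capable GPU and driver):

```python
import numpy as np
from numba import cuda

@cuda.jit
def scale(arr, factor):
    # Each GPU thread handles one element; cuda.grid(1) is its global index.
    i = cuda.grid(1)
    if i < arr.size:
        arr[i] *= factor

data = np.arange(8, dtype=np.float32)
d_data = cuda.to_device(data)                  # copy to GPU memory
threads = 32
blocks = (data.size + threads - 1) // threads  # enough blocks to cover the array
scale[blocks, threads](d_data, 2.0)
print(d_data.copy_to_host())                   # [ 0.  2.  4.  6.  8. 10. 12. 14.]
```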
I guess the interoperability with Python is a bit better. On the other hand, PythonCall.jl (which allows calling Python from Julia) is quite good and stable. In Julia you have quite good ML frameworks (Lux.jl and Flux.jl); I am not sure there are Mojo-native ML frameworks that are similarly usable.
Mojo to me looks significantly lower level, with a much higher degree of control.
Also, it appears to be more robust. Julia is notoriously fickle in both semantics and performance, making it unsuitable for the kind of foundational software Mojo strives to be.
I've looked into making Python modules with Julia, and it doesn't look like that is very well supported right now, whereas it's a core feature of Mojo.
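To be fair, calling Julia from Python does work today via juliacall, the Python half of PythonCall.jl, though that embeds a Julia runtime in the Python process rather than producing a standalone module. A minimal sketch:

```python
from juliacall import Main as jl  # pip install juliacall; fetches Julia on first use

jl.seval("f(x) = 2x + 1")  # define a Julia function in the embedded runtime
print(jl.f(20))            # call it from Python: prints 41
```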
Hi, author here! This is exactly what we tested in our article:
> Third, we show that fine-tuning for warmth specifically, rather than fine-tuning in general, is the key source of reliability drops. We fine-tuned a subset of two models (Qwen-32B and Llama-70B) on identical conversational data and hyperparameters but with LLM responses transformed to have a cold style (direct, concise, emotionally neutral) rather than a warm one [36]. Figure 5 shows that cold models performed nearly as well as or better than their original counterparts (ranging from a 3 pp increase in errors to a 13 pp decrease), and had consistently lower error rates than warm models under all conditions (with statistically significant differences in around 90% of evaluation conditions after correcting for multiple comparisons, p<0.001). Cold fine-tuning producing no changes in reliability suggests that reliability drops specifically stem from warmth transformation, ruling out training process and data confounds.
Hi, author here! We used a dataset of conversations between a human and a warm AI chatbot. We then fed all these conversation snippets to a series of LLMs, using a technique called fine-tuning, which trains each LLM a second time to maximise the probability of outputting similar text.
To do so, we indeed first took an existing dataset of conversations and tweaked the AI chatbot's answers to make each one more empathetic.
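Not our actual pipeline, but the objective is easy to sketch: one causal-LM fine-tuning step on one warm-styled snippet, with a small stand-in model ("gpt2") so it runs anywhere; the snippet text is invented for illustration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer  # pip install transformers torch

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
opt = torch.optim.AdamW(model.parameters(), lr=5e-5)

# One conversation snippet whose chatbot answer was rewritten to be warmer.
text = ("User: I failed my exam and feel awful.\n"
        "Assistant: I'm so sorry, that sounds really hard. Be kind to yourself.")
batch = tok(text, return_tensors="pt")

# Standard causal-LM loss: raise the probability of reproducing this text.
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()
opt.step()
```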
That's not how the GDPR works, and in this case the data is clearly not anonymised, despite the authors' claims. Among other things, there need to be mechanisms for users to delete their data, whether it was at some point public or not.
The authors can presumably update the dataset on the site; however, I think past versions remain. Besides that, the GDPR is at odds with the fact that public posts and data almost never go away. I don't think that reality can be legislated away, try as politicians might.
In all honesty, for the sake of practicality, it's better to reserve its force for private, personal data.
I see on the landing page a screenshot with "Test for GDPR PII compliance", which suggests that this tool is probably not ready for any serious usage.
Anyone in the regulatory landscape would know that the GDPR is an EU data protection law, and PII a US concept which doesn't apply under the GDPR. The GDPR uses the concept of ‘personal data’, not ‘personally identifiable information’. This is not just a wording issue: redacting, masking, or removing information which appears to be ‘personally identifiable’ only constitutes pseudonymisation under the GDPR, which does not offer any meaningful privacy protection.
Thanks for the feedback! We agree that this tool is definitely not ready for serious usage at this stage; it would require heavy tuning and testing before wide adoption.
I'm going to repeat myself, as I do every time I encounter such tools: these tools DO NOT provide anonymization, and especially not at the level required by the EU's GDPR (where the notion of PII does not exist).
As a computer scientist and academic researcher who has worked on this topic for more than a decade (some of my work, if you are interested: [1, 2]), I can say that re-identification is often possible from a few pieces of information. Masking or replacing a few values or columns will often not provide sufficient guarantees, especially when a lot of information is being released.
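To make that concrete, here is a toy sketch (entirely synthetic data, made up for illustration) of the classic linkage attack: the released table contains no names, yet joining on a handful of quasi-identifiers re-attaches them.

```python
import pandas as pd

# The "released" table was stripped of names before publication...
released = pd.DataFrame({
    "zip":        ["10115", "10115", "20095"],
    "birth_year": [1980, 1992, 1980],
    "gender":     ["F", "M", "F"],
    "diagnosis":  ["flu", "asthma", "diabetes"],
})
# ...but an auxiliary dataset (think public registers) shares quasi-identifiers.
voter_roll = pd.DataFrame({
    "name":       ["Alice", "Bob"],
    "zip":        ["10115", "20095"],
    "birth_year": [1980, 1980],
    "gender":     ["F", "F"],
})
# The join re-identifies two of the three "masked" medical records.
print(released.merge(voter_roll, on=["zip", "birth_year", "gender"]))
```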
What this tool does is called ‘pseudonymization’, and maybe, if done very carefully, ‘de-identification’ in some cases. With colleagues, we reviewed all the literature and industry practices a few months ago [3], and our conclusion was:
> We find that, although no perfect solution exists, applying modern techniques while auditing their guarantees against attacks is the best approach to safely use and share data today.
Of course there's no perfect solution for anonymizing a dataset...
The extension offers a large panel of masking functions: some are pseudonymizing functions, but others are more destructive. For instance, there's a large collection of fake data generators (names, addresses, phones, etc.).
It's up to the database administrator or the application developer to decide which columns need to be masked and how they should be masked.
In some use cases pseudonymization is enough; in others, anonymization is required...
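As a rough Python analogue of those fake-data generators (not the extension's actual API; just the faker package, for illustration):

```python
from faker import Faker  # pip install faker

fake = Faker()
row = {"name": "Alice Example", "phone": "+1 555 0100", "city": "Berlin"}
masked = {
    "name":  fake.name(),          # destructive: the original value is gone
    "phone": fake.phone_number(),
    "city":  fake.city(),
}
print(masked)
```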
Co-author here and happy to answer questions!