Their story review meetings really have their work cut out for them with the rise of AI slop chumbox advertisers, lazy journalists using AI, cartoonish political figures playing third-world warlords, Chester Sokolsky's sub-basement QAnon daily, and Tim Pool taking Russian money.
I can see how this has the potential to disrupt the games industry. If you work on a AAA title, there is a small army of artists making 19 different types of leather armor. Or 87 images of car hubcaps.
Using something like this could really help automate, or at least kickstart, the more mundane parts of content creation. (At least when you are using high-resolution, true-color imagery.)
Yeah, there's a lot of 2D assets that this model would be great for (textures, materials, *maps, etc) that would definitely improve the asset-building process for game devs. I've already used VQGAN+CLIP for some low-res skill and item icons in hobby games and it seems things are only improving from here.
I wouldn't be surprised to see a comparable version for 3D models in the next year or two, though. Even if the current architecture doesn't lend itself to 3D structures (I don't know), there's a lot of parallel work being done right now (esp. by Google) for encoding 3D data in new/efficient ways, translating specialized 2D images into 3D models, and more.
Suppose you wrote a program to go and create your cloud infrastructure. Using AWS's APIs, you write an app to spin up a VM, set up a load balancer, and provision a database. You run your program, and it works like a charm. Now you have all the infrastructure set up just the way you want it!
The problem is what happens when you want to _update_ that infrastructure? The program you wrote, using the aws.CreateVM(..) and aws.CreateLoadBalancer(..) API calls, is no longer applicable. Instead, you probably want to use a different set of APIs, like aws.UpdateVM(…), to update your existing cloud resources. For example, to change some port settings on your load balancer. So your app needs to be smart enough to check if the resources already exist, and if so, update them. Otherwise, create them fresh.
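To make this concrete, here is a rough sketch of the imperative approach (TypeScript; the aws.* client below is the same hypothetical one as above, not the real AWS SDK):

```typescript
// Hypothetical imperative client, mirroring the aws.CreateVM(..) and
// aws.UpdateVM(..) calls above -- not the real AWS SDK.
declare const aws: {
    vmExists(name: string): Promise<boolean>;
    CreateVM(name: string): Promise<void>;
    UpdateVM(name: string, settings: { openPorts: number[] }): Promise<void>;
};

// Every run has to rediscover the current state and pick the right call.
async function ensureVM(name: string): Promise<void> {
    if (await aws.vmExists(name)) {
        // The resource already exists: mutate it in place...
        await aws.UpdateVM(name, { openPorts: [80, 443] });
    } else {
        // ...otherwise, create it fresh.
        await aws.CreateVM(name);
    }
}
```

Now multiply that by every resource type and every kind of change you might make.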
And it gets even worse. What if you want to create some new resources, such as attaching an SSL certificate to your load balancer… but still keep all of your existing infrastructure as-is? Or what if you want to update an IAM usage policy that is already in use by several other resources… Somehow your app needs to know the impact of that change, and how it will ripple out across other cloud resources.
Does that start to make sense? You don’t really want a “wrapper” for cloud APIs. You really want something that allows you to effectively describe your cloud infrastructure, and “make it happen”. And leave the specifics of “how” as an implementation detail… accomplished by a cloud provider’s APIs.
That is what Pulumi does, along with other Infrastructure as Code tools like Terraform. It provides you a way to describe your cloud infrastructure in a programming language, so that every time you run your app it will make the cloud reflect that target state. It will:
- Create resources if they don’t exist.
- Update existing resources if they do.
- Delete any resources that you no longer need.
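For instance, a minimal Pulumi program might look something like this (TypeScript; the AMI ID is a made-up placeholder):

```typescript
import * as aws from "@pulumi/aws";

// Describe the target state: one VM. On each `pulumi update`, Pulumi
// diffs this description against the stack's last-known state and
// creates, updates, or deletes resources until the cloud matches.
const server = new aws.ec2.Instance("web-server", {
    ami: "ami-0c55b159cbfafe1f0",  // placeholder AMI ID for illustration
    instanceType: "t2.micro",
});

// Expose the resulting address as a stack output.
export const publicDns = server.publicDns;
```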
I work at Pulumi and am happy to go into details about the joys of not dealing with cloud APIs directly, and just using Pulumi :)
Hi. I was curious if you could talk about how "native providers" differ from how Terraform does things. Also, how come there's no native provider for AWS yet, but there is for GCP and Azure? I would have thought that would have been a "must have" for release. Is it just the number of resources AWS has that made it more time-consuming, or something else? Thanks.
Thanks for saying this ^^, it's the best explanation of what this thing actually is (sorry, but I didn't really get it from the homepage). Define a state of infrastructure using a real SDK (not YAML files), and it can figure out and apply the migrations from the current state to the new state. (Right?)
And in addition to making it easier to manage cloud resources by defining that state in a programming language, Pulumi can do other interesting things with your resource graph too. For example, analyze resources and check that they are compliant with security best practices and whatnot. https://www.pulumi.com/docs/get-started/crossguard/
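A sketch of what such a policy looks like with CrossGuard (TypeScript; the pack and policy names are invented for illustration):

```typescript
import * as aws from "@pulumi/aws";
import { PolicyPack, validateResourceOfType } from "@pulumi/policy";

// A policy pack that fails any update which would create a public bucket.
new PolicyPack("aws-security-checks", {
    policies: [{
        name: "no-public-s3-buckets",
        description: "S3 buckets may not use a public ACL.",
        enforcementLevel: "mandatory",
        validateResource: validateResourceOfType(aws.s3.Bucket, (bucket, args, reportViolation) => {
            if (bucket.acl === "public-read" || bucket.acl === "public-read-write") {
                reportViolation("S3 bucket ACLs must not be public.");
            }
        }),
    }],
});
```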
That still wraps access to the cloud services, albeit adding some more functionality. My main question was: is it a cloud service itself, or can you use it standalone (as a classic API)?
Hi! I work at Pulumi and have been using it to stand up and manage all of our service infrastructure.
> How does Pulumi keep track of which services are launched, especially during testing/development
Each Pulumi program is run within the context of "a stack". The stack is essentially a collection of cloud resources. So when the Pulumi program runs, it will create resources that aren't in the stack, or update existing ones.
So if you create any resources during dev/testing, you just need to `pulumi destroy` those stacks and all of the cloud resources will be reclaimed.
This, IMHO, is one of Pulumi's best features, in that it makes it super easy to create your own instance of a cloud application. For example, I have my own development instance of app.pulumi.com, created just by making my own Pulumi stack and rerunning the same application.
> How does it determine the optimal size of instances/volumes/etc to launch?
It doesn't. The Pulumi program you run determines what resources to create, so you are left to configure, tune, or tweak that as makes sense.
From the examples it looks like Pulumi programs declare their infrastructure, causing it to be created. Doesn't that mean that the program will need privileged credentials? How do you make sure the app only has, say, read access to an S3 bucket it needs to listen to, and can't accidentally delete it? And how does that then allow it to declare the bucket?
> Doesn't that mean that the program will need privileged credentials?
Obviously whatever program is actually creating the cloud resources will need credentials to do so. However, they aren't part of the Pulumi program.
When you run `pulumi update` on your machine (or on a CI/CD server), Pulumi will pick up whatever ambient credentials are on the machine (e.g. ~/.aws/credentials). So if you want to restrict the credentials used to update a particular Pulumi stack, you just need to swap out whatever the current credentials are (e.g. an AWS_ACCESS_KEY_ID env var).
> How do you make sure the app only has, say, read access to an S3 bucket it needs to listen to, and can't accidentally delete it? And how does that then allow it to declare the bucket?
There are a lot of good questions there, so let me show you a quick example:
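(TypeScript sketch, assuming the @pulumi/aws SDK:)

```typescript
import * as aws from "@pulumi/aws";

// Create an S3 bucket named "example.com-images" with a private default ACL.
const bucket = new aws.s3.Bucket("example.com-images", {
    acl: "private",
});
```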
This snippet will create a new AWS S3 bucket named "example.com-images". It also sets the default ACL for the bucket to "private". Nothing too surprising there.
If you wanted another resource to have read access to that bucket, you would need to configure AWS to grant access. The Pulumi programming model is about how you declare/describe/create resources; it doesn't actually define policy for how they work. So when using AWS, you would potentially need to create an `aws.iam.Role` / `aws.iam.RolePolicyAttachment` object and hook them up. (Or, if using Azure or GCP, configure access using some other method.)
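Continuing the bucket example, the hookup might look roughly like this (the role and policy names are invented; `bucket` refers to the snippet above):

```typescript
import * as aws from "@pulumi/aws";

// A role that EC2 instances can assume (the principal is just an example).
const readerRole = new aws.iam.Role("imageReader", {
    assumeRolePolicy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [{
            Effect: "Allow",
            Action: "sts:AssumeRole",
            Principal: { Service: "ec2.amazonaws.com" },
        }],
    }),
});

// A policy granting read-only access to the bucket created earlier.
const readPolicy = new aws.iam.Policy("imageReadPolicy", {
    policy: bucket.arn.apply(arn => JSON.stringify({
        Version: "2012-10-17",
        Statement: [{
            Effect: "Allow",
            Action: ["s3:GetObject", "s3:ListBucket"],
            Resource: [arn, `${arn}/*`],
        }],
    })),
});

// Hook the two up.
new aws.iam.RolePolicyAttachment("imageReadAttachment", {
    role: readerRole.name,
    policyArn: readPolicy.arn,
});
```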
So in short, to configure what _cloud resources_ can read/write other _cloud resources_, it's a matter of how the cloud resource provider exposes that.
When it comes to matters like preventing you from accidentally deleting resources when you run `pulumi update` on a program, there are a few features that can help you with that. You can mark a resource as `protected`, so that any update that would delete that resource produces an error. (Until you update the program again, marking that resource as no longer protected.) Also, the `aws.s3.Bucket` type has a `forceDestroy` parameter that does something very similar: unless it is set to true, a non-empty bucket cannot be deleted. (Thereby preventing some accidental data loss.)
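For example (sketch; same bucket as before):

```typescript
import * as aws from "@pulumi/aws";

// The `protect` resource option makes any update that would delete this
// resource fail with an error until the option is removed.
const bucket = new aws.s3.Bucket("example.com-images",
    { acl: "private" },
    { protect: true },
);
```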
Makes sense. That makes it sound like Pulumi only runs the infrastructure declarations when you run "pulumi update", and that those things don't run when your program runs. That's confusing to me, because your examples (like the thumbnailer) seem to have the program and the declarations in the same file.
Is Pulumi stateful, then? If you create resources with "pulumi update", change the declarations without updating, and run "pulumi destroy" or whatever, it will only delete the stuff you created in the first step? (That is what I would expect. I would also expect it to support a dry run mode with a diff showing what operations would be executed.) If so, where is this state stored?
> That makes it sound like Pulumi only runs the infrastructure declarations when you run "pulumi update", and that those things don't run when your program runs. That's confusing to me, because your examples (like the thumbnailer) seem to have the program and the declarations in the same file.
This is an optional way to do it, by combining the runtime code and infra code. The runtime code doesn't run when you deploy with "pulumi update," but it is packaged and sent to AWS.
> Is Pulumi stateful, then? If you create resources with "pulumi update", change the declarations without updating, and run "pulumi destroy" or whatever, it will only delete the stuff you created in the first step? (That is what I would expect. I would also expect it to support a dry run mode with a diff showing what operations would be executed.) If so, where is this state stored?
Yes, the state is stored on pulumi.com. The state is a list of the resource IDs that you provisioned. The Pulumi CLI does indeed have a dry run mode that shows a diff: whenever you run "pulumi update", it first shows a preview.
Yes, Pulumi does mutate resources in place if the cloud provider supports it. For most resources, though, it will create a new one (such as a new ECS task) and wait for it to be ready before deleting the old one.
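And if you ever want the opposite ordering for a particular resource, there is a resource option for that (sketch; the task definition contents are elided):

```typescript
import * as aws from "@pulumi/aws";

// Opt out of the default create-before-delete replacement behavior:
// the old resource is deleted before its replacement is created.
const task = new aws.ecs.TaskDefinition("app-task",
    {
        family: "app",
        containerDefinitions: JSON.stringify([/* container specs elided */]),
    },
    { deleteBeforeReplace: true },
);
```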
Several years ago, my employer created a similar tool with this exact same "feature". What we've found is that while standing up entire stacks in non-prod is kinda cool at first, it's a real drag at scale. We've had to walk back that feature with some hackish workarounds. We've also found that all the API calls necessary to determine what needs to be created can result in us being throttled by Amazon (the dreaded "Rate Limit Exceeded" error).
Still, this looks very cool, in that it's a real programming language and not YAML/JSON (which is another of our problems).
Could you provide some more detail on what made it a drag? Was it just the Amazon API issues? Cost? Security? Governance? Your experiences here seem like they could be valuable to other folks in the same situation.
Amazon API issues and the amount of time it takes to "discover" complex application stacks in production.
Concrete example: Our framework pulls in remote service dependencies via a link to an ELB in order to set remote HTTP endpoint URLs (yes, we know service discovery is a thing, but that's not where we were when we started). Some projects have 15+ dependencies, and it would take literally hours for it to walk the dependency tree. As a workaround, someone built the capability of passing in those dynamic URL endpoints and then the deployments were revised to build the remote URLs via string interpolation. Deployment time dropped to 10 minutes once we walked away from the concept of deploying stacks from the top down.
2nd concrete example: A developer used an incorrect argument during a deployment and deployed a second full stack of his application rather than replacing a single service. (I understand most other tools have diffs/change sets, but this particular developer isn't the sharpest knife in the drawer...) Rather than fix it immediately, he manually fiddled DNS entries and launch configs to create a mishmash stack. Naturally, he didn't tell anyone, so it took weeks (and lots of EC2 $$$) before we found and fixed it all.
I do see some value in an automated full stack deploy with all dependencies, but it should be the exception and not the rule.
Does it really? Valve is extremely profitable because of their particular role in the video games industry. But outside of Steam, is the company really a role model for success?
Is Valve's latest "new" IP DOTA2 successful when compared to the other set of MOBAs that have come out over the past few years?
Have Valve's forays into VR and hardware been successful compared to other companies in the space? e.g. the Vive, Oculus?
Looking at Steam itself, has the service dramatically improved since launch? Certainly there have been a lot of good incremental updates, e.g. two-factor auth. But the Apple App Store has seen a lot more updates in terms of search, discoverability, etc.
I am not an expert, and may have my details wrong about Valve. But I don't think the company has been especially successful outside of being the only game in town for buying AAA video games online.
> Looking at Steam itself, has the service dramatically improved since launch? Certainly there have been a lot of good incremental updates, e.g. two-factor auth. But the Apple App Store has seen a lot more updates in terms of search, discoverability, etc.
This may not be your main point, but have you ever used the Steam store? It's the only app store that I've ever encountered that actually gives me valuable suggestions, and thus the only one into which I've sunk a considerable amount of money. (At least €200, probably more. In second place is Google Play, where I've spent some €7 or so on apps.)
Just look at the huge percentage of games that are only purchased and never played (or even downloaded) to see that Valve has absolutely nailed the Steam store.
Regarding the actual argument: I agree that for a company this large, the output, in the form of actual products (esp. games), from Valve is surprisingly low. But I'm not sure if that's even a bad thing: for some studios, it works well to release 10 mediocre games within a certain amount of time. For other studios, it works better to release only one extraordinary game in the same timespan.
I don't think success outside of Steam needs to be considered. Steam is their core business. The software itself is quite bad, but they're the dominant player by far.
Dota 2 I would estimate is #2 behind League of Legends, and it's too early to tell whether VR is working out, but guess which platform VR games are going to be delivered through?
(Besides Steam, their games are pretty successful: Half Life, Team Fortress, Portal, Counter Strike, Left 4 Dead)
Just for some clarification on the VR side, the Vive is a partnership product between them and HTC. Valve did the R&D, and HTC did the manufacturing, and Valve helped Oculus get things off the ground as well. I'm not a gamer, so I can't speak to that world, but they are doing truly innovative things in VR.
Perhaps not, but if it didn't, it would be because the rest of the org that didn't want to do Steam would have been more empowered to stand up to Gabe; whereas with Valve's flat org, Gabe is God and everyone has to do what he says.
It's probably best for Valve that Steam did happen, but if it's due to Valve's flatness, I would attribute that to the org's inability to resist Gabe rather than any sort of bonus wisdom that arose from that flatness.
Your body reacts to the sweetness and produces insulin, which, when not paired with actual sugars, turns into fat. This is known as "Insulin Resistance" (http://en.wikipedia.org/wiki/Insulin_resistance).
This is not what insulin resistance is (read his own link):
>Your body reacts to the sweetness and produces insulin, which when not paired with actual sugars, turns into fat.
Additionally, items which digest to glucose trigger insulin production. Aspartame has repeatedly been shown not to. The same goes for most other low-calorie sweeteners.
Saccharin had one study that showed a correlation; however, several other studies did not.
Honestly curious here: why is stevia excluded from the list in the article? It would seem to fall under the same process and cause an equal amount of harm. Or is it the naturalistic fallacy at work?
If you judge your success by fidelity with your competitor's product, you will NEVER win. The only way they can possibly gain double-digit market share is if they actually do something different/disruptive.