Did they build a real simulator yet, like iOS has? Note, not an emulator, which does hardware stuff. A simulator, so we don't have to use up our entire machine running another operating system just to run a Java app.
I'm on the emulator team; we've considered this route before, but the hard part about maintaining a simulator is that the contact surface with the host OS is much larger: we would essentially have to port the current year's version of Android, with all of its Java/C userspace APIs, to every host OS we wanted to support. That would require far more resources than we can dedicate to porting, and/or we'd have to make the hard choice of maintaining only certain Android versions and skipping others entirely. It would also be very difficult to maintain fidelity.
With full virtualization, the contact surface is restricted to a few low-level bits in the kernel along with a few HALs/drivers that need to talk to the host for meaningful/fast I/O, like input/network/graphics, and everything else can be kept stock with no modification. This lets us ship largely the same binaries that would go on a dedicated Android device and have them run on Windows/macOS/Linux easily, and it's how we've been able to keep up with the pace of yearly Android releases.
Edit: Oh, and note that we are totally aware of, and bummed out by, the emulator's increasing resource usage as the Android version goes up, to the point that we're afraid the next one will finally be the one that really needs all the resources of a modern PC. As such, we're looking into ways to minimize the CPU/RAM/disk footprint of the system images, keeping only the bits that are actually needed for testing apps (with Google Play services, and being better at maintaining/promoting more stock AOSP images for the case where GMS isn't needed).
Since the switch to x86 images with VT-x acceleration and GPU pass-through, the emulator has gotten massively faster.
Neither of which required anything resembling latest-generation hardware. Try it again before you pretend your experiences from years ago are still relevant.
> Not everyone has Google level budgets for hardware.
Given that my current one runs Windows 10 64-bit Pro, Visual Studio 2019, OpenJDK 14, clang 10 and Eclipse 2020-03 just fine, I am not upgrading a perfectly working computer just to run the Android emulator. Or are you offering to get me one?
Especially since it was Google's decision to drop support for its capabilities around the Android 7 emulator release.
And your point is obviously ridiculously untrue. You're complaining about poor performance (not that it doesn't work, mind you, just that it's slow), while also refusing to upgrade to a system from the last decade.
You can get a system that will run the emulator with great performance for less than $50 on eBay. There's no possible way to describe or classify this as requiring "Google level budgets for hardware." If you can't afford to upgrade, that sucks, but you also can't make a reasonable complaint about performance while insisting on using a 10+ year old system.
If we're all throwing spaghetti at the wall here, what do I know, but maybe create an android-sdk OS that one may boot into and is heavily optimized for (speedy) android development?
Yep, that basically sums up what we're looking into: a system image that still keeps up with the latest API levels and features introduced in the latest Android OS, but has minimal impact on resources.
Thanks for the interest! We've been exploring this in limited ways; for example, https://github.com/google/android-emulator-container-scripts contains a set of Python/Docker/JS scripts for setting up WebRTC streaming of emulators remotely. More to come soon.
I'm very interested in this. I usually run into crashes in production on specific devices that I don't own. It would be nice to be able to reproduce them on a virtual device. FYI, the site is not loading atm.
With WSL2 being a full(-ish) VM, running Android on a Linux base might be interesting (especially when using a Linux host, see the other anbox comment). Not sure how to deal with macOS though.
And yeah, as of Android Studio 3.x (I was waiting for the official 4.0 release), it was trying pretty hard to nudge you towards using the Google Play services images.
> With WSL2 being a full(-ish) VM, running Android on a Linux base might be interesting (especially when using a Linux host, see the other anbox comment). Not sure how to deal with macOS though.
But this is pretty much what Android Emulator is - a fullish VM on a Linux base.
Yes, but that's different in that it's x86 on x86, so you can use hardware virtualization features for maximum performance (through Hyper-V). ARM on x86 can't use hardware virtualization; you have to run it all in software, which is why it's slow. There's big value in getting rid of the architecture mismatch.
The original WSL(1) was. However, the new version, WSL2, brings a lot of architecture changes. Now it's basically a virtualized Linux running in Hyper-V.
The nice thing about that new architecture is that now applications like Docker can use WSL2 to run the containers which was not possible with previous versions.
Thanks, I've read about this before but just went over it again in more detail yesterday with a fresh look.
So this is basically the simulator situation, but with easier management of which libraries the guest userspace can dlopen and files to read()/write().
It's much like current Android-on-ChromeOS capabilities where containers are used to isolate where the "guest" userspace libraries are stored, so that it's not necessary to interop well with things like the host version of libc for example.
However, the problems come when considering the interface to the host kernel and hardware. Here are just two of the showstoppers:
1. Android expects to run on a particular range of kernel versions and configs each release. Fidelity is sacrificed to run with a wide range of Linux host kernel configurations. It's also easy for components on the host system such as SELinux to interfere with guest operation (and Android itself expects to use its own version of SELinux...so which one wins in the end?).
2. Further customization of the guest userspace is needed to account for interop with a regular Linux system; e.g., input/network/display become much more code that touches various parts of guest userspace and potentially hurts fidelity, versus the VM abstraction where these appear as fake hardware and no customization of guest userspace is needed.
There are also isolation issues that involve more delicate dances, such as how to prevent runaway resource usage in the container from hogging the whole system (VMs merely waste the #vcpus + RAM dedicated to them; while that can be a lot compared to the host, it's explicitly controllable).
These problems sound less serious on the surface than porting the Android framework directly to the host OS, but in the end it's basically the same level of essential complexity; containers just remove the incidental complexity of guest userspace libraries leaking into your /usr/lib and interop with your filesystem.
And once we try to run on non-Linux systems we're back at square one, needing to port all userspace code to the host OS (unless you're running Docker on macOS/Windows, in which case you'd be creating a VM again, sacrificing the benefits of containers over VMs while keeping the complex customizations).
This is probably why Microsoft is pushing WSL2, ChromeOS skipped Android 10 support and is looking into ARCVM, and anbox is still running Android 7.1.1 (w/ plans to update but skipping releases in the meantime).
I think the emulator is the vastly superior option, as it runs real Android with exactly the same code as your phone. I never had any issues with performance (with hardware acceleration). Maybe on extremely weak laptops it might cause issues, but Android Studio will probably be unusable there anyway.
Android emulation (I've tried both the bundled emulator and VirtualBox+RemixOS) is the most insanely slow piece of software I've ever seen in my life. It feels like running Windows 10 on an original 80386. My laptop is a Core 2 Duo with 4GB RAM; IntelliJ IDEs and everything else work great (especially after I upgraded my HDD to an SSD), desktop and server OSes work great in VirtualBox, but Android feels like playing chess by mail. Why does it have to be heavier than desktop Linux plus Java?
I don't want to upgrade the PC as I love it the way it is and it's enough for everything else. I feel like I'd like to make a simple app for my phone occasionally (just for fun and to make something in my life easier) but emulation seems impossible.
i'm with you that android emulation is pretty slow, but your machine is reaching near parity with many of the devices that run the platform you're trying to emulate.
it's nearly always the case that an emulator host has to have a significant performance margin over the guest just to account for the inaccuracies/inconsistencies/hacks that are involved in mimicking a system on a dissimilar hardware platform.
I would be OK emulating the slowest Android phone from some years ago; I bet it was much weaker than my actual laptop, yet not nearly as slow as the emulation I actually get. I'm not interested in using the NDK, 3D graphics or any bells and whistles in my Android apps.
Make sure to customize the AVD then and use a much lower resolution. If you pick a default like the Pixel 3 it'll be running at a resolution that really doesn't work if you're trying to share 4GB of RAM with a host OS, IDE, and whatever else you have open.
EDIT: Also make sure you've enabled VT-x if your laptop has it.
If you're targeting a fairly old Android release, that would probably be ok. But I'd bet to get decent performance on your hardware, you'd have to emulate a low-RAM, single-core phone that probably wouldn't be able to run a current Android release.
Most Android phones nowadays have more RAM than your laptop. That CPU is also 10+ years old and a phone running a Snapdragon 855 or 865 is going to perform 2-5x better depending on which exact chip you have and single vs multi core benchmarks. The Android emulator may not be the fastest thing in the world, but your 10 year old computer is certainly not helping.
I've never actually seen a smartphone with more than 4GB of RAM (mine has 2GB and runs Android 9 in Full HD with perfect performance). Whatever; a huge amount of RAM is usually only used when running heavy apps. Running the basic system plus a small app isn't supposed to take anything near 100% RAM or CPU.
An Android app I want to write is going to be a single form with some text fields and some buttons, it will have simple logic but it needs to run in background, be able to read a sensor, read/write some SQLite/CSV data and play a sound occasionally. No fancy graphics bells and whistles.
This seems like a very simple task to handle, I bet it's not going to take much RAM or CPU.
- Galaxy S9+ (2018): 6 GB
- Galaxy S10 (2019): 6 or 8 GB
- Galaxy S10+ (2019): 8 or 12 GB
- Galaxy S20 (2020): 8 or 12 GB
- Galaxy S20 Ultra (2020): 12 or 16 GB
- OnePlus 3 (2016): 6 GB
- OnePlus 8 (2020): 8 or 12 GB
- LG G7 ThinQ (2018): 4 or 6 GB
- LG Velvet (2020): 8 GB
- Pixel 4 (2019): 6 GB
- Xiaomi Pocophone F1 (2018): 6 or 8 GB
- Huawei Mate 20 (2018): 4, 6 or 8 GB
- Sony Xperia 1 (2019): 6 GB
- Moto Razr (2020): 6 GB
Pretty much every Android phone maker is producing phones with at least 6 GB of memory now. The flagships are heading toward 12 and 16 GB. OnePlus shipped more than 4 GB almost 4 years ago. Only Apple, which has the benefit of optimizing its OS for a small set of hardware, seems to still be running on 4 GB or less on all of its phones.
You're trying to emulate an entire OS here. Android 10 requires a minimum of 2GB of RAM just on its own now. You have overhead for the emulation software itself on top of that, in addition to the memory needed by your computer's OS, your IDE, your browser, etc. Android Studio alone requires a minimum of 1GB and, at least in my experience, it will use that within 5 minutes of opening up.
Not to mention, even budget phones today are at least twice as fast as the CPU you're running.
You can't expect emulated software to run well on a computer that barely even meets the minimum requirements for that software. It's ludicrous.
> Pretty much every Android phone maker is producing phones with at least 6GB of memory now
OK. I can see no reason to care about those, though. What the heck do they need that for?
> You're trying to emulate an entire OS here.
But I emulate desktop Linuxes and Windows without problems - that's entire OSes too.
> Android 10 requires a minimum of 2GB of RAM just on its own now.
Why? What does it need this much for? Anyway, I can reserve 2GB of RAM now and I'm sure I'm not going to notice too much drop in performance.
> You can't expect emulated software to run well on a computer that barely even meets the minimum requirements for that software. It's ludicrous.
As for the app - I'm sure it can't require anything near that much. As for the OS - why does Android have to require so much more than Linux and Windows do?
BTW my actual phone (Galaxy Note 3) runs Android 9 on 3 GB RAM perfectly. I never tried 3D games but everything else is blazing fast. I've been updating it (Android 4-5-6-9) for years and never noticed a glitch.
> Ok. I can see no reason to care about such though. What the heck do they need that for?
> Why? What does it need this much for?
As a developer, that shouldn't matter to you. Only that it _does_ require that much. You can assume that your users will have 4GB of memory in almost all cases now, whether you agree with it or not. Your opinion on the _why_ is irrelevant, frankly.
If you can't provide that much to the emulator, you can't expect to run it smoothly. There is no magic solution to this. These things require resources that your computer does not have. If you want it to run better (or even at all), get better hardware. That's all there is to it.
Why, if absolutely everything except Android emulation (desktop Linux and Windows emulation work fine) and recent games (which I'm not interested in) works perfectly? I replaced my HDD with an SSD a couple of years ago and it feels like a new PC again. I never experience even the slightest discomfort when I'm not trying to emulate Android.
> Because emulation is a heavy operation. Be realistic.
Then why is desktop Linux and Windows emulation not nearly as slow? Windows 7+ emulation is slightly slow (especially running Visual Studio on an emulated Windows 7, which is painful while perfectly swift non-emulated), but still usable if you have enough patience; Android emulation is absolutely unusable. Emulated Windows XP (VirtualBox on a Linux host) runs even faster than on bare metal.
> Also I don't believe that PC isn't painful to use for everything else. I had a core2duo a while back and it was slowish back then.
You probably did something wrong. It's not just painless, it's perfectly swift (both with the latest KDE Plasma and with Windows 7). I have a modern PC at the office (late pre-Ryzen AMD, 8GB RAM, Win10) and its perceived performance is the same. It even feels slower (less responsive) than the Core 2 Duo occasionally, but I blame that on Windows 10.
Of course I only mean basic tasks like browsing the web, watching videos (1080p or less), juggling files and writing code (Visual Studio, IntelliJ) for small projects. Obviously any modern CPU will beat a Core 2 Duo in video transcoding, modern games, etc.
Even a current i5 is more than double as fast per core, and likely has 12 threads instead of 2. So as a roundabout figure let's say 8x the effective speed.
VT-D makes your Core2Duo better able to emulate Windows or Linux in hardware, but not Android.
Like I said, I don't believe that a Core2Duo isn't slow for everything you mentioned. I had one at home. I wasn't doing anything wrong - I know how fast it isn't - we had hundreds at my work. Have you noticed that other commenters are saying the same thing about Core2Duo speed as me? You must have more patience to wait for your PC than myself and the other commenters.
The reason I replaced my Core2Duo was because it was slow, and I replaced it around 8 years ago!
I only used it for Scala, never actually written anything beyond a Hello World in Java. Perhaps Scala takes less resources. I also use PyCharm actively and it works much faster on Linux than on Windows (on Windows it indeed is slow).
Yeah, I struggle to keep my RAM usage in PhpStorm below 1.5GB. And that's with me policing its indexing and what plugins are installed. There's no way that's a good experience.
Why does Android dev have to be this much heavier than any other task? I'm OK doing .NET, Scala and Python dev (always using the most powerful IDEs), some lo-fi machine learning, running Windows and Linux and heavy Windows apps in VirtualBox, but Android I can't emulate.
What I have resorted to is writing single-page JavaScript apps and running them in the browser so I can debug them in the host browser instead of an emulator.
Remember that we're talking about emulating an entire system here, which is very different from running an IDE (even considering that an IDE like IntelliJ is pretty heavy-weight).
Windows and Linux themselves in a VM is probably reasonably ok on your hardware because both of those OSes are built to run on non-current hardware (especially Linux; I'm running Debian buster with a current kernel on a circa 2010 single-core ARMv5 CPU with 256MB of RAM with no problems).
Android doesn't target old hardware; I wouldn't expect a current release to run all that well even on a four-year-old phone, let alone a 10+-year-old laptop with virtualization overhead.
Also the usual stuff applies: be sure you have your system's virtualization extensions enabled if it has any, use the x86 Android emulator image with hardware accel enabled, not the ARM one, etc. But that still may not be enough.
How are you doing all of those things with only 4GB of memory? My wife's old laptop had 3GB, and with about 10 Chrome tabs open, the computer would start swapping and quickly grind to a halt—with no other programs running in the foreground. I just checked memory usage on my own machine and Firefox alone is using over 3GB for me, and IntelliJ another 4GB. Granted, I know many programs tend to use more memory if it's available (for caching, etc), and I also hardly ever restart my machine, but still... Running a full dev environment with 4GB doesn't seem like it should be possible, so I'm curious how yours is so functional.
I usually have about 50 open Chrome tabs (on my Core2Duo with 4GB RAM). Half of them YouTube (I mostly watch 360p though, I only switch to 1080p when I watch somebody coding). This could get slightly slow occasionally when I used a HDD, not even slightly slow now as I switched to an SSD. I usually have PyCharm open so I code at the same time while watching something. I only experience some minor slowness when I run a system (Manjaro) update in the background and when PyCharm re-indexes libraries after an update.
Does your Core 2 Duo processor support hardware virtualization extensions (VT-x)? Some do, some don't. If it doesn't, you'd get no acceleration in the Android Emulator, so it would be expected to be pretty slow.
Mine does and I wouldn't be shocked by Android being that slow if normal Linuxes didn't work perfectly. People say that's because Android lacks proper virtual video card drivers.
Modern Android devices have specs similar to (or slightly lower than, on the low end) your laptop's. And your laptop has to run both the IDE and the emulator. There's no reason to expect that to run comfortably.
You're missing a point: the app I need to run in the emulator is extremely simple, almost "hello world"-like. The OS itself shouldn't take many resources. I would expect 99% of the CPU and 80% of the RAM to be unused when running such a humble app on a bare OS. Another point is that I actually have VT-x, so x86 Linux and x86 Windows can normally be emulated without much overhead, but x86 Android is a different story.
It sounds to me like you had the emulator set up incorrectly, TBH. It's bad, but not that bad. Make sure you're using an x86 image, have hardware acceleration enabled, have the virtualization drivers installed, etc.
You can buy yourself 32 GB of RAM only if your motherboard chipset supports that much. If it does, then it can be really cheap, especially if you consider buying used RAM; it usually works fine forever once it passes MemTest86+.
A word of caution: a few times I've gone to bed with Android Studio and the emulator up, and woken up with my machine's fans running at full speed and the thing about to melt down; the emulator can do funky things like taking 600% of your CPU for no apparent reason after sitting there idle for hours.
Hi, what host OS are you running? I'd like to send over a build with symbols so we can profile it (or, on macOS, use the process sampler). Let's also try `adb shell top` next time this happens; we've found that when networking state changes, the virtual radio sometimes gets into a bad state and starts spinning.
# when it spins:
# method 1: go to activity monitor -> sample process, send over the result
# method 2: lldb
lldb
process attach --pid <pid-of-qemu-system-<arch>>
bt all #send it over
# method 3: guest process taking CPU
adb shell top
Interesting, sounds like a live loop in some audio thread somewhere. What's your audio setup and AVD config.ini? I might be able to reproduce on my Mac.
> I never had any issues with performance (with hardware acceleration). Maybe on extremely weak laptops it might cause issues, but Android Studio will probably be unusable there anyway.
I have to use a MacBook 13 (2019) in my current project, and it's having trouble with the emulator. Not sure if it's the hardware or macOS inefficiencies, but I can't wait to get back to my Linux desktop, where the emulator is buttery smooth and almost feels like a part of the system itself.
What does that mean? The x86 system image can run arm code? I will try that (might get confusing if there is an arm and x86 lib in the apk).
I have been using the image without Google APIs to save space, because my SSD is almost full.
Does it reject invalid arm? I wrote my app in Pascal, and the FreePascal compiler often generates invalid assembly. Especially because I need to use the nightly build, because the last stable release does not support aarch64
Yes, the x86 system image can run ARM code. Note that this will be much faster than running a fully emulated ARM system image, because only the guest userspace bits need to be translated from ARM to x86. System calls are still marshalled to x86 calls, so memory accesses are many times faster (there's no need to drop into emulating the MMU).
(Also AFAIK illegal instructions are still validated and trapped as SIGILL or SIGSEGV)
iOS simulator can leverage the host's macOS libraries and kernel. Android doesn't share anything at all with Windows or macOS, and shares only the kernel at best with Linux. The best you can hope for is running Android in a container on a host Linux system, or via WSL. Unless you can get containers running natively on macOS, or run a Linux kernel a la WSL, there will need to be a layer of emulation to run Android.
While the iOS Simulator does share the host kernel it does not share libraries. It is effectively a separate userspace, with its own launchd_sim, its own mach bootstrap namespace, and so on.
The only exception to that is libsystem_{pthread,kernel,platform}, because those form the kernel's ABI boundary. Libraries like libc, libdispatch, et al. are the iOS builds and use the iOS ABI.
Thanks for the clarification, I was under the impression that they shared libraries like the browser engine and what not. Not sure where I got that misconception.
An Android app is a Java app using a set of special APIs. Why can't it run on a host JVM given some libraries implementing the same APIs? I'm not interested in the NDK.
> Why can't it run on a host JVM given some libraries implementing the same APIs?
That's a hideously huge API surface to double-implement, with a new and unique set of bugs reducing its usefulness in CI builds, all for slightly improved performance in a relatively niche dev edge case (non-NDK Android dev without Android hardware). You'd also need to skip conversion to DEX format (or convert back) to run on a standard JVM.
But your suggestion isn't technically impossible. You could (re)implement the subset of APIs your application happens to use yourself, if you really wanted to go down that route, which, by virtue of being a much smaller subset than the entire Android Java API surface, might not even be impractical.
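A toy sketch of what that could look like, just to make the idea concrete: a hand-rolled stand-in for android.util.Log so that app logic calling Log.d can run on a plain host JVM. Purely illustrative; in practice people reach for an existing tool like Robolectric rather than writing stubs like this themselves.

    package android.util

    // Minimal stub mirroring the shape of the real class; only what this hypothetical app uses.
    object Log {
        @JvmStatic
        fun d(tag: String, msg: String): Int {
            println("D/$tag: $msg")  // forward to stdout instead of logcat
            return 0                 // the real Log.d returns an Int, so keep the signature
        }
    }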
I think it has issues with either GPU acceleration, or with saving state and resuming the emulator. It cannot do both, so you have to choose between fast startup and fast usage.
During typing this comment I nearly lost my mind because my macbook shift key is playing up and other keys are doing key repeats. I think I might go insane, stupid butterfly design
You can run Android in a container at native speed. You can also use containerized Android with binfmt_misc and libhoudini to run foreign binaries without having to run the entire Android OS in an emulator.
Oh, on those. Compared to iOS and ViewControllers, there just isn't a comparison. Starting them? Oh, can I do new Activity()? Nope, use this Intent thing. Rotation? Good luck! Passing data around? I have to use a bundle? Why can't I set a variable inside, like on any other object?
How do you set those variables? Do you keep a reference to your activity or fragment inside another screen? Do you clean up that reference on configuration change, or let it leak and/or cause crashes?
Your best bet to pass data around is to... not pass it but rather use a sophisticated DI setup where you can set up shared objects that live in a certain scope. Of course, you can add parameters to a bundle with some boilerplate, but that's not a great idea if the serialization overhead hurts you.
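For anyone who hasn't touched this in a while, here is roughly the Intent/extras dance being complained about, with made-up class and key names; contrast it with just constructing an object and setting a field:

    import android.content.Context
    import android.content.Intent
    import android.os.Bundle
    import androidx.appcompat.app.AppCompatActivity

    class DetailActivity : AppCompatActivity() {
        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            // Read the value back out of the extras bundle; it had to be serializable to get here.
            val userId = intent.getStringExtra("user_id")
        }
    }

    // Instead of something like `DetailActivity(userId)`, you hand the system an Intent:
    fun Context.openDetail(userId: String) {
        startActivity(Intent(this, DetailActivity::class.java).putExtra("user_id", userId))
    }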
I only see one reply remotely regarding Activites and Fragments and it's just asking what problems you had. It should be pretty obvious to anyone that's tinkered with Android that the whole battle between which to choose is ridiculous.
That said - I'd highly recommend checking out the Navigation components that are a part of Jetpack. They make standard navigation patterns, including deep linking, a complete breeze.
Oh, my team has tried the new stuff. It's terrible. Especially with the viewmodel stuff. Hidden bugs, no documentation, constant API changes. Hell, you couldn't scope a viewmodel to a nav graph until recently, which defeats the entire purpose of the whole thing.
It's just frustrating that 10 years later, we still have nothing that is remotely capable of the things an iOS ViewController can do.
I’m still amazed and somewhat annoyed by the filesize of Android Studio. And as you keep adding simulators and what not (which inevitably happens), before you know it, Android Studio occupies 20% of your entire disk space.
Same for Visual Studio (the full one, not VS Code). I think this is just the nature of these giant IDEs that let you target multiple OS versions: if you want to faithfully target/simulate multiple operating systems, you need both the OS binaries and the platform libraries for each of them. As the saying goes, a couple gigs here, a couple gigs there, and soon you're talking about real disk space.
VS can be pretty small, but there are thousands of optional components. My 2019 Enterprise install with lots of boxes checked is 6 GB (in the program dir; it also has some runtimes etc. outside of it).
Considering the last patch for just one of the games I play was 50+ GB, it's pretty reasonable...
If you carefully maintain your Android Studio installation, it's not that bad. People usually install multiple SDKs and system images, which take up quite a lot of space.
The system images are the big ones. The SDK + sources for a given platform isn't that big, like ~100MB. The system image + saved emulator state is easily ~2-5GB though.
Check out Zig: in less than 50 MB, it supports cross-compiling on any major os/arch for any other major os/arch, and it compiles most plain C code, and it includes libc headers for linux/mac/windows.
50 million bytes is really a lot of bytes, it's just that everyone has become accustomed to orders-of-magnitude bloat, and doesn't realize the 100x duplication and inefficiency which sneaks into just about every part of computing where it can get away with it.
This appears to just be a programming language with very limited frameworks compared to those provided by Android and iOS. For example, where are the GUI toolkits, audio toolkits, robust storage frameworks, 3D capabilities, etc? This doesn't seem like an accurate comparison at all. Sure, you can get down to 50MB if you throw out all the features.
... because it was claimed that the IDEs are many gigabytes because they "let you target multiple OS versions". Well, you can get the required essence for targeting multiple operating systems and architectures in 50 MB. If a "proper IDE" for a single OS/arch/platform is 500 MB, then one for "multiple" could be about 550 MB. So how does it get up to 5 GB or more? Convenience and junk.
Just because Zig can compile for multiple platforms, doesn't mean it can target those platforms like those IDEs could.
For example, with Visual Studio, a large part of the installation footprint are the various SDKs you need to actually build a realistic Windows application (that interacts with the OS, maybe does some COM stuff, brings up a GUI, uses networking..etc) There are also multiple versions of these SDKs, especially the Windows SDKs. Visual Studio also supports a lot of languages, and for some of these languages, there is support going back to very old versions and tech stacks. You can still build Windows XP compatible applications in modern Visual Studio versions.
Can Zig talk to COM? Can it set up a DXGI swap chain? Can I pop up notification toasts as simple as calling one function? How do you package Zig apps for the Microsoft Store?
So yes, you can compile Zig programs for multiple platforms, and these can run on those platforms. But once you start getting into actually interacting with those platforms, you will need all those things you call convenience and junk. Libc is just a fraction of what you need to make applications for a platform.
Visual Studio has a big footprint only if you choose to install everything. I don't see the point in doing so. Are you developing for every available platform and in more than one language?
In fairness here you probably need to add the Android SDK, NDK, and an emulator image or two, to match what iOS is giving in xcode. That isn't going to make the difference up, but from memory the NDK is around a 1.1 GB compressed download or so (if you need it), and you'll probably need at least one or two SDK versions for the releases you target.
I'd be worried running Mac/Windows on that thing; meanwhile he's unhappy with running an OS and an emulated OS, both targeting systems with more power than that...
1. The selective Java SDK API desugaring, though limited, is much awaited. Little things like `java.time` being usable back to API level 14 make it less painful; you don't have to end up using Joda-Time even though it's officially no longer developed (see the build-file sketch after this list).
2. Good to see Google focus on animations finally
3. The CPU profiler looks amazing
The transition to AndroidX has been slow and painful, but overall the Android dev experience is improving consistently.
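As a rough sketch of point 1 (the artifact version below is just a placeholder; check the current release notes), enabling that desugaring in a module's build.gradle.kts looks something like:

    android {
        compileOptions {
            // Let D8/R8 rewrite java.time and friends so they run on old API levels
            isCoreLibraryDesugaringEnabled = true
            sourceCompatibility = JavaVersion.VERSION_1_8
            targetCompatibility = JavaVersion.VERSION_1_8
        }
    }

    dependencies {
        // Placeholder version of the desugared JDK library
        coreLibraryDesugaring("com.android.tools:desugar_jdk_libs:1.0.5")
    }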
Can you cite some examples where UI animations aren't counterproductive, annoying, amateur-hour crap? Swiping screens away to convey a stacked hierarchy is useful, but beyond that I struggle to imagine what I'd do with them.
There's a reason we hated Flash-based Web sites, and animation was a huge part of it.
Now I'm considering which is worse: animated UI, or transparent UI. The asinine transparent-UI fad largely and deservedly died in the '00s, with the exception of Apple suddenly jumping on its discredited and defunct bandwagon years later.
Don't see why we need to bring back animated UI either, but I'm happy to consider suggested uses.
Speaking of stacked hierarchies, the rotatable 3-D display of views in a UI might be pretty useful. I like that idea.
I miss doing my designs visually. I used storyboards again for a demo app, and getting the layout working for all screen sizes just worked once I got the layout right in Interface Builder.
With coded constraints you need to do that change-code/restart cycle, especially with more complex layouts, and SwiftUI isn't there yet.
Those motion guides look like what Flash used to have, dearly missed.
That's interesting; how common is it to write C++ for Android applications, and with Android Studio? I'm not that informed regarding Android stuff, but I understood that the Native Development Kit wasn't really that well maintained, making C++ a second-class citizen compared to Java (and maybe Kotlin?).
I am doing it right now, for a computer vision app. CMake is kind of a pain, but the rest of it works fine.
My one other work-related Android app also used C++, for computer vision-related stuff, 8 years ago. Android Studio/Kotlin/C++ is in my very subjective opinion about 20% better than 2012 Eclipse/Java/C++.
If I wanted to do basically everything in C++, I would strongly consider Qt.
You can also use Rust (and some big companies are - Cloudflare is shipping Rust on Android/iOS for their warp VPN, and Dropbox uses it in their desktop sync client (not sure if the same code is also used on mobile)).
C++ for the UI is extremely uncommon, and very much a second-class citizen. C++ for parts of an app, though, is common enough.
For example Facebook's Litho is a Java API, and uses a large chunk of the Android SDK's systems. But it also uses a layout engine in C++. This hybrid approach isn't uncommon, and Android Studio has some nice features to work in this mode like being able to follow usages across the JNI boundary and to auto-generate the JNI stubs.
Games used to be coded in C++, in Visual Studio or another IDE, so they could be ported to all platforms, with Java used only for a thin Android-specific layer. If your game breaks on Android, then you need to use Android Studio for debugging.
It works perfectly well, but nowadays it has been superseded by the prevalence of Unity 3D.
From Google's point of view, C and C++ should only be used for writing native methods, not full blown applications.
Naturally, via this mechanism you can also write portable business code that is exposed via a Java bindings library.
Or you invert the logic: the application is written as a library, with Java/Kotlin just doing the OS connection points and the NDK logic somehow driving the application.
Vulkan and real time audio are also only available via the NDK, to use them from Java/Kotlin you need to write your own bindings.
I used it to create a bridge between a C library for audio encoding and my android application. So I assume people use it to create JNI wrappers around third party native libraries.
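For the curious, the Kotlin side of such a bridge is small; here is a bare-bones sketch with made-up library and function names (the matching C/C++ implementation is compiled with the NDK and shipped as a .so inside the APK):

    object AudioEncoderBridge {
        init {
            // Loads libaudioencoder.so from the APK's native libs on first use
            System.loadLibrary("audioencoder")
        }

        // Declared here, implemented in C/C++ as JNIEXPORT ... Java_..._encodeFrame
        external fun encodeFrame(pcm: ShortArray): ByteArray
    }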
It's really interesting to me that, in the end, we build interactive screens. Consider all the complexity it takes to build one of those for Android (or iOS), and what it takes to make a screen for the browser. One requires a whole stack of big, complicated, brittle tools. The other can be achieved in a single HTML file.
It really depends where you want to draw the line between the code which is part of the "OS" and the part that is the "app". The web platform puts as much as possible into the "OS", while android has far less, so the app has to include much more.
I'm not a defender of the "modern web" recursive dependencies approach to building js sites, but the one thing the web platform seems to do better than Android is let the developer focus more onto their own logic, rather than reimplementing basic boilerplate activities. Sadly I've never really found a "one layer higher" overlay for the Android APIs to let developers take advantage of good boilerplates by default - I'd rather not pretend I'll implement data loading from an array or db into a scroll view better than someone else that has thought it through and understands the platform, because I know I won't!
I will take a look at Jetpack - it does sound promising as the "next layer up". I was hoping Flutter might try to fill this space - it feels to me that there's no inherent need for the "higher layer" to be platform specific, although there's probably an impetus for Google/Apple to make it so, from the perspective of ecosystem control, developer mindshare, and simply reducing effort to build and maintain.
At least skimming the docs and watching the introduction videos, it does sound like Jetpack might be what I'm looking for! Thanks.
I think there are 2 issues with Android and iOS development here - first is the steepness of the "on ramp". I learned web development using the "edit html page in text editor, switch to browser, press F5 and look" approach. That was really it unless you wanted to wait a few years then go down the FrontPage route. Android and iOS have a much steeper on-ramp, and the work required to get something simple (but not straight from the docs as an example) on your screen is considerably higher than in HTML. Part of this is due to web browsers benefitting from a full stack and complex browser, as another commenter said.
The second issue is around the abstractions used to make apps - I'll speak for Android here but I believe iOS is quite similar in outline. In Android, abstractions are incredibly leaky and require you to understand much more about the internal workings than HTML does. I can build a site in HTML without worrying about how the browser implements the functionality - for example, I can make an un-numbered list, or create a list box and populate it with items, fairly easily, even doing so dynamically from JavaScript.
In contrast, on Android, you need to start understanding ListView versus RecyclerView (apparently the latter is better to use?). Then to understand Adapters. Then to figure out how to get content into that ContentAdapter. At some point invariably you'll discover issues with whether your code is running on the UI thread (and blocking the UI from updating while it runs, which is obviously bad), or running off the UI thread (and thus unable to update the UI elements)... You can't (at least in my experience) just get down to focusing on your business logic, without reinventing the wheel a fair bit here. You'll also need to discover the "de-facto go-to library" sometimes, where the built in API isn't good. Bluetooth springs to mind, where I believe most people would recommend using a third party library rather than the included one. That's a discoverability challenge.
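To make the threading point concrete, here is a minimal sketch of that dance (the layout and view IDs are hypothetical): the blocking work has to happen off the main thread, but the view may only be touched back on it.

    import android.os.Bundle
    import android.widget.TextView
    import androidx.appcompat.app.AppCompatActivity

    class ItemsActivity : AppCompatActivity() {
        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            setContentView(R.layout.activity_items)         // hypothetical layout
            val label = findViewById<TextView>(R.id.label)  // hypothetical view id

            Thread {
                val items = loadItems()   // slow work: must not block the UI thread
                runOnUiThread {           // ...but UI updates must hop back onto it
                    label.text = "Loaded ${items.size} items"
                }
            }.start()
        }

        private fun loadItems(): List<String> = listOf("a", "b", "c")  // stand-in for real I/O
    }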
I don't think we should take this level of control away from anyone wanting it, but I think it's important we recognise mobile development seems to favour the approach of "bring your own batteries, soldering iron, and flux, as we don't provide any", whereas web has some available batteries and equipment you are free to use, or alternatively to build your own. It would be nice to see some default "batteries" pre-installed, so people can get started without having to learn the details of all the leaky abstractions needed to understand how to get an app to show up and respond to input etc.
That's interesting. I am an iOS developer, and find development for browser deployment to be a mountain of hokey crap and flavor-of-the-week frameworks all so you can kind of cajole MOST browsers to more-or-less show what you want where you want it on the screen.
And then you have to write all the serialization and back-end support on the server side, or use yet another flavor-of-the-week stack to broker all of the back-and-forth.
But all of that is optional stuff for the web, even though it isn't presented as such. You don't actually need a transpiler, bundler, fancy libraries etc to write interactive programs for the browser. You really can do everything in an HTML file with some html, javascript, and some css. You can draw pictures declaratively with svg, or programmatically with canvas. And so on. You can see some nice work done in this way in places like codepen.io
Oh the browser/JS world is far from perfect too - the NIH-isms and re-invention of the wheel in stacked and layered frameworks is definitely a problem on the web.
I think it will come down to the level of abstraction that works and makes sense, while helping enable building things, though still focusing on the business logic, not the boilerplate.
I guess this is still one of the many un-solved problems!
> At some point invariably you'll discover issues with whether your code is running on the UI thread (and blocking the UI from updating while it runs, which is obviously bad), or running off the UI thread (and thus unable to update the UI elements)...
FWIW, this exact same issue comes up in web development, only it is harder to spawn threads (in this case, web workers) to solve issues.
I've built many useful tools as single HTML files. And several of my larger projects have started as single HTML files and stayed that way for quite a while. Even when they get bigger they don't grow all the cruft that a "Hello World" Android app has right out of the new project wizard.
I think the issue is just about app scale. Android development is designed around medium or large apps, and doesn't at all try to optimize for small apps, and certainly not a Hello World app.
And honestly, I'd prefer that an IDE generate a Hello World app with all the random stuff in it that a full-blown app would need, even if it isn't required for Hello World. It makes the learning experience a lot more useful.
Single HTML files are a nice unit of component for building UIs. See the BrickLink store-front. Each part is defined in its own HTML file and then these are concatenated before being served. At dev time each component can be put through its paces in isolation ("explore the characteristic state space"). At runtime you can `clone()` the DOM to make more, modify it and so on. It worked really well.
For anything complex, your "single html file" would look horrible.
If anything doing iOS work after decade and a half of web was a breath of the fresh air and felt way less brittle.
Not even close. You need a whole directory tree of random junk and a whole passel of build tools before that XML file does anything. Meanwhile on the web I can open any text editor, type some random HTML without any boilerplate, possibly even with some inline JavaScript code, save it with a .html extension and it's already a single page app.
As a bonus, all the tools to debug it (even with a device simulator or remotely on a real Android device) are already installed as part of the browser, and the browser itself is a download of less than 100 MB.
It literally takes a minute or two to create an entire Hello World Android application. At that point, all you have to do is edit your one XML file. The IDE does as much for me as the browser does for you.
This is the classic web vs. native debate. What are we optimizing for? Making it as easy as possible for someone with no experience to throw something together? Sure, the web is great for that. I would rather prioritize performance and the user experience.
I spun up an Android project 2 days ago and it took me much longer. I already had IntelliJ installed, so I just had to answer all the prompts about API level and emulator settings, but then Gradle completely failed. It took me an hour to research and fix it; it turns out recent versions of Java (>9) report their version in a different format. Ultimately the solution was to configure IntelliJ to use an older Java to run Gradle.
The cool thing about the browser is that you don't need other tools. In a sense, the browser has a build step: when the root HTML file is loaded, its sub-resources are loaded as late as possible and combined. If you really need more control over this "build" process, you can take programmatic control using XHR. What's even cooler is that you can dig into ANY browser program at runtime! Can you do that on Android or iOS without specialized tools?
I haven't touched Android development in... Maybe 7 years? And even then, wasn't really in the trenches. I have to imagine boat loads have changed since then.
If I wanted to poke around with some hobby projects, where would be the best place to start from scratch with all the best practices and development tools?
Read the Kotlin Koans (assuming you are not familiar with Kotlin); they are the best way to get used to the language if you know Java, and Kotlin is much more pleasant to use than Java.
Best practices and tools might be harder, a LOT has changed in the past 7 years.
After that, it depends on whether your hobby is writing an Android app, or whether the resulting product is the hobby.
If it is the former, Compose is lots of fun and will be a big leap in Android development. You can already use the dev version.
MVI has also deeply changed how I code Android apps. Again, maybe not needed for the level of complexity of a hobby app, unless you want to toy with something different.
MVI stands for Model-View-Intent.
It's a state management pattern that uses unidirectional flow.
Basically you send events (Intents, similar to Redux actions, not to be confused with the Android Intent class..) to your Model, which does a state update and sends out the updated state to the View.
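A minimal sketch of that flow in plain Kotlin (illustrative names, not tied to any particular MVI library): the view emits intents, a reducer produces a new immutable state, and the view just renders whatever state comes out.

    sealed class CounterIntent {
        object Increment : CounterIntent()
        object Reset : CounterIntent()
    }

    data class CounterState(val count: Int = 0)

    // The reducer is the only place state changes happen, always by producing a new copy.
    fun reduce(state: CounterState, intent: CounterIntent): CounterState = when (intent) {
        CounterIntent.Increment -> state.copy(count = state.count + 1)
        CounterIntent.Reset -> CounterState()
    }

    // The view layer only renders state; it never mutates it directly.
    fun render(state: CounterState) = println("count = ${state.count}")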
It does sound very familiar!
Yeah there is nothing new there (and MVI on Android having been heavily influenced by what's going on on other platforms is readily acknowledged).
It is newish (it has been there for several years now) as far as "trendy" Android architecture patterns go.
We started with a bad implementation of MVC for several reasons:
- What Google ended up with (which does not look like what they initially intended, but I digress), with its god Activities, has encouraged the community to go in that direction.
- When Android was first released, the smartphones of the time were heavily CPU- and memory-constrained. The way applications were built was to best fit those constraints (sometimes in naive ways).
And again, I am not advocating MVI for all problems or claiming it is the one architecture to rule them all. Just something interesting that was very likely not on the GP's mind 7 years ago.
To add on top of what lubonay wrote, this unidirectional data flow lets you solve a wide range of problems in how we handle state (and its changes).
so you have
- the state, only containing pure Java/Kotlin objects
- reducers to handle state changes. Basically a reducer is a method that takes a state as input (and often another parameter, e.g. a user input) and creates another state as output.
- and a model of your UI that is created from each state.
This works very well for having fully testable business logic (which is where having no Android objects is nice, so tests can just run on the JVM) and for reasoning about how both the state and the view change (both state and view states are immutable, and are altered by copy).
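That testability claim is easy to see with a sketch: because the state and reducer are plain Kotlin, an ordinary JVM unit test (JUnit 4 here, with an inlined toy reducer) can exercise the logic without any Android classes or an emulator.

    import org.junit.Assert.assertEquals
    import org.junit.Test

    class CounterReducerTest {
        // Plain Kotlin state + reducer, no Android objects anywhere
        data class CounterState(val count: Int = 0)
        private fun reduce(state: CounterState, increment: Boolean): CounterState =
            if (increment) state.copy(count = state.count + 1) else CounterState()

        @Test
        fun incrementBumpsTheCount() {
            assertEquals(2, reduce(CounterState(count = 1), increment = true).count)
        }
    }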
Again, not necessarily something I say that grandparent absolutely needs, but if they want to try something new, it is there.
The code it generates leaves a lot to be desired. It's highly non-idiomatic, and has the marks of a computer-generated structure all over it. This being said, it certainly facilitates the transition from Java to Kotlin as it makes for a convenient learning tool (used on small portions of code).
ABI stability. You don't need to ship the whole runtime with it anymore. Though if you want a framework that works nicely with both languages and older versions, Objective-C is still better.
As an Android dev, I disregarded any hiring company that was not transitioning to Kotlin. It was a major red flag if the project was relatively new. My personal project on Spring Boot is completely powered by Kotlin, and I fully expect Kotlin/JS to be ready for production in a year or so for full-stack development.
Writing in Kotlin really is a nicer experience, and with near-full access to all the legacy Java libraries, I would never want to switch back, even with the small improvements Java has made to become more like Kotlin. I highly suspect that as the number of Java devs exposed to Kotlin increases, the number of devs happy to write Java over Kotlin (or Scala) will decrease. Android going Kotlin-first was in part justified by giving the decision makers a short period of a week or two to get adjusted away from Java and see how they felt about the language in comparison.
I agree with everything you say, but that transition will take time. I think Kotlin is a great language, and it combines a lot of the best strengths of others while not giving up that JVM tooling goodness. However, I think you still need good Java chops to work with Kotlin, at least to deal with those dependencies. In fact, it's almost exactly the situation TypeScript is in: yes, it's a good language, but you can't really use it well unless you know JavaScript (and in particular ES6) really well. Edit: sorry, I'm really talking more generally than Android. You're probably right about hiring for Kotlin for Android instead of Java. But Kotlin is targeted at everything, and for that I think it's probably good to wait a bit.
I'm not 100% sure, since I only did a basic Scala intro, but it seems like it has a non-trivial learning curve coming from Java. For example, I was a C# dev with FP experience and it still took me a while to figure out the basics in Scala. With Kotlin, I barely knew Java and I just used the auto-convert tool and Google to simplify my Java code when I was writing some Android native stuff.
This sucks so hard. Most Java libraries are planning to drop support for Java 8 when Java 17 hits LTS. This will completely divide the language into two, one for Android and the other for everything else.
Google could just run OpenJDK on phones and toss all the garbage they built, but they're too proud. The GC and performance of ART aren't anywhere near OpenJDK's.
So you're forced to learn a new language? I was obliged to write code for a university project in Kotlin instead of Java.
Most of the code was already written in Java. So I just used the tool to convert to Kotlin. Then I tried to infer how Kotlin works based on the generated code to write the rest. I don't feel proud about that project.
I wish Google would stop farting out new languages every few months and focus on Kotlin. If they don't want to commit to any of them, they should throw in the towel, adopt Swift, and be done with it. The continual flailing is goddamned annoying.
On android there is absolutely no sign that there is anything besides a full commitment to Kotlin. There are no new docs put out without Kotlin code and all the major libraries have ktx variants that allow for better idiomatic Kotlin code.
In the community, one of the easiest ways to spot that an example is old and probably outdated is if the author wrote it in Java. For new things, people aren't even bothering to keep up the guise that Java is relevant.
Also Kotlin isn't created by Google and I personally would rather see Kotlin come out on top if we are directly comparing it to Swift.
Then what are you referring to? The only languages Google has released are Dart & Go, which are 8 & 10 years old respectively. That doesn't remotely qualify as "farting out new languages every few months"
Because you have a team of Java developers, or most of the tooling you are going to use is built around Java (e.g., data streaming). My team had to write some Flink; luckily you can now do some of it in Python.
Modern versions are not as bad (to me); you can use var for type inference. IIRC they are implementing stuff similar to Scala.
It's used quite a lot in enterprise as well.
Note: I'm by no means a Java dev; I've just learned to hate it a little less.
IMO it's mostly syntactic convenience and elimination of boilerplate. The Kotlin collections APIs are more convenient than the streams APIs. Data classes are convenient. Nullability is significantly more convenient than Optional.
And you get all that even if you have to target old JVM versions for some reason.
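A small sketch of the kind of convenience being described: a data class plus Kotlin's collection operators doing what would take a getters-and-setters POJO plus a Stream pipeline in (pre-records) Java.

    data class User(val name: String, val age: Int)

    // equals/hashCode/toString/copy come for free from `data class`
    fun adultNames(users: List<User>): List<String> =
        users.filter { it.age >= 18 }
             .map { it.name }
             .sorted()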
Optional<T> is itself a reference and can itself be null. That effectively torpedoes the entire concept out of the gate. Nullability being part of the data type is a huge difference, and exposes so many "I didn't know that could be null" spots.
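A quick sketch of the difference: in Kotlin, nullability is part of the type and checked at compile time, rather than being a wrapper object that can itself be null.

    fun greet(name: String) = "Hello, $name"          // cannot be handed a null at all

    fun greetIfPresent(name: String?): String =       // nullable String is a distinct type...
        name?.let(::greet) ?: "Hello, stranger"       // ...and must be handled before use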
I've seen both Optional<?> return null (inexperienced dev) and somebody arguing we should defensively check for null Optionals as well (experienced dev).
What you're arguing is a false equivalence. In practice null safety isn't a problem when developing in Kotlin while it is a big problem in Java.
PMD itself is no standard, though. The best standards are enforced by the compiler anyway.
> Not everything needs getters and setters, it is just cargo cult generating them blindly.
The JavaBeans spec (with getters and setters) is an actual standard. It's a purely theoretical argument that you don't have to use them, when in reality they are used always and everywhere, and any PR without them would be shot down immediately.
Setters/getters are just one example of Java's boilerplateness.
> Static analysis like PMD is also a basic need, not using something like Sonar as part of a CI/CD is just being careless.
Yes, but I want that feedback while coding, not in CI/CD. Yeah, I can set it up in the IDE, but that's more configuration, more things to break, more team-wide standards. Why not just use Kotlin?
I'm no Java hater, I code Java for most of the day professionally. But why pretend that it's perfect? It's 25 years old with unfixable mistakes built in and ripe for replacement.
> The JavaBeans spec (with getters and setters) is an actual standard. It's a purely theoretical argument that you don't have to use them, when in reality they are used always and everywhere, and any PR without them would be shot down immediately.
Sounds like a strict "your company" problem. No one needs to use the JavaBeans spec. It's fully optional.
> Yes, but I want that feedback while coding, not in CI/CD. Yeah, I can set it up in the IDE, but that's more configuration, more things to break, more team-wide standards. Why not just use Kotlin?
You make a very weak case here for Kotlin since you need a static analyzer anyway (e.g. Detekt).
> I'm no Java hater, I code Java for most of the day professionally. But why pretend that it's perfect? It's 25 years old with unfixable mistakes built in and ripe for replacement.
No one pretends Java is perfect. For many projects I like using Kotlin. But there's also substantial investment in Java and "I have to type a bit less" is usually not enough to justify using Kotlin.
> Sounds like a strict "your company" problem. No one needs to use the JavaBeans spec. It's fully optional.
Your company - maybe I've been working in the wrong companies in the past 15 years, because Java's setters and getters have been used everywhere. Also look into some open source projects - how often do you see direct access to (non-final, non-package private) instance variables? Can't really recall a single case. For a good reason - exposing field access on a public interface violates encapsulation.
> You make a very weak case here for Kotlin since you need a static analyzer anyway (e.g. Detekt).
Of course a static analyzer is necessary. That does not invalidate the claim that null safety guaranteed by the compiler is much faster/more effective.
> "I have to type a bit less" is usually not enough to justify using Kotlin.
Typing less is a minor benefit in a typical large-scale project. The big win is better readability, since you don't have to scroll over tons of boilerplate or artificial abstractions caused by Java's rigidness.
Besides that, NPEs are still happening in production all over the world, so something is obviously "not ideal".
And when they become available, Java will have them on day one, while with Kotlin, who knows, especially when it needs to stay compatible with ART.
Kotlin's designers will eventually have to face the reality that they are either an Android-only language, or that Kotlin Multiplatform will need to differentiate between what is supported on the JVM and on ART.
Multiplatform, tons of libs and tools, statically compiled, supported by major IDEs, mature dependency management (Maven, Gradle), backed by a major player (Oracle). I'm sure there's more.
It does, but I could only get it to trigger in certain circumstances (I think it was upon pasting Java code into a Kotlin file).
What I really wanted from that feature was a text buffer that can accept arbitrary fragments and translate them to Kotlin without committing anything to disk. But it has been a few years; I should check back on that.
I was asking in general not just about Java in relation to Android. Java has always felt like a clunky and closed language under the dictatorship of Oracle. Why not C++ or Rust? Does Java have any benefits that these two lack?
I'm probably ignorant here. I haven't coded in Java for 5+ years.
Indeed. Had it not been for Oracle's work, Java would have been stuck at version 6, Maxine would never have been made into GraalVM, AOT compilers for Java would still only be available via commercial JDKs, and the (almost) zero-pause GCs with support for TB heaps, the JNI replacement, support for SIMD/AVX512, the incoming support for value types, and the rewrite of C++ stuff into Java would never have happened.
People like to bash Oracle, but no one else, including Google, bothered with Sun's assets, and the community on its own would never have made the improvements that Oracle has made in the last 15 years, if Python and Ruby are any indication of how runtime improvements get made.
I'm sure the push for Kotlin has something to do with the bitter lawsuit they're currently embroiled in, rather than any particular love for that language.
Why buy the cow when you can get the (goat) milk for free? Before Sun was up for sale, Google chose to base its not-Java™ implementation on Apache Harmony (incubated at IBM).
There was no way of knowing that the farmer who'd buy the cow would claim goat milk infringes cow-milk protein IP (apologies for stretching the metaphor past its breaking point).
Also, what many keep forgetting is that Oracle was with Sun on Java since the early days; they were even the first RDBMS to support Java for stored procedures, and they designed the Network Computer with JavaOS together with Sun.
From my point of view, Google chose not to buy Sun because they hoped Sun would just burn to the ground and that, as such, they would get away with how they approached the whole situation.
On the other hand, if Google had bought Sun, we would most likely still be stuck with Java 6 and, at best, runtime improvements, if Go and Dart are any examples to go by.
There are plenty of Java APIs which won't ever be implemented on Android, for example AWT and Swing.
That's why it should not be called Java at all. Sun sued Microsoft over J++ because their implementation was incomplete; Google got away with that. Java is more than the language and a few classes from the standard library.
Wait, why? Won't that outcome turn FOSS into a legal quagmire?
As now everyone will rush to copyright "common" API signatures, and lawsuit troll anyone found in violation of the copyright?
"I see your FOSS uses `parse(string)`. We own the copyright on that API, please buy a license or we'll sue."
So then every API would become, e.g., `umvi_parse(string)` to avoid paying. I'm sure all sorts of auto-prefixer tools would pop up to help automate this, but still...
It probably wouldn't be that bad: there are already copyright doctrines for situations where something is so commonly used that it can't be protected, or where there's only one reasonable way to express a particular idea. Plus, copyright protection is usually not extended to extremely short phrases or snippets of text, so even in an "APIs are copyrightable" world, "parse(string)" is probably OK. (Of course, I only said probably, which is part of the problem.)
The Oracle argument (and as things currently stand in their case, the accepted argument) is basically that the API taken as a whole can be protected even if each individual function name can't, so you could protect the "Java API" but not "compare(a, b)". The idea of copyright arising from "structure, sequence and organization" is well established in the computing context as well as others (for example, you can hold copyright in a compilation even if the individual things being compiled aren't themselves copyrightable).
Still, API copyright would lead to all sorts of uncertainty at least at first, and certainly puts compatible implementations or reimplementations in the crosshairs, so I'd really prefer an Oracle loss here.
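To make the "compatible reimplementation" idea concrete, here's a minimal, made-up sketch (Parser and DecimalParser are invented names): the declaring code, i.e. the names, parameters and structure, is what the copyright argument targets, while the body underneath can be written independently.

    // Hypothetical "declaring code": just the shape of the API.
    interface Parser {
        fun parse(input: String): Int
    }

    // Independently written "implementing code" behind the same declarations.
    class DecimalParser : Parser {
        override fun parse(input: String): Int = input.trim().toInt()
    }

A reimplementation keeps the top half identical so existing callers keep working; the dispute is over whether that top half, taken as a whole, is protectable.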
> The Oracle argument (and as things currently stand in their case, the accepted argument) is basically that the API taken as a whole can be protected even if each individual function name can't
And let's be honest, that does make some sense. We've all designed APIs that were particularly elegant and that we were proud of as something almost artistically beautiful. Surely APIs as a whole can constitute some kind of intellectual property.
I'm not sure if copyright is the right way to protect APIs, mostly because copyright lasts for an absolute eternity in software engineering timeframes. But I do think that there ought to be some protection against others copying an API (unless you license it e.g. as open-source), although interfacing with an API should always be allowed.
Are other types of interfaces copyrightable? I think that should be what sets the precedent.
For example, would I be able to create an airplane with the same cockpit layout (same button/lever positionings, etc) as Boeing so that pilots don't have to learn a completely different layout to fly Umvi brand planes?
And the nightmare of determining who owns the APIs in things like SQL, BSD, POSIX, libc, spreadsheet functions, the cairo and HTML canvas path APIs, the HTML DOM, and more. Not to mention compatibility projects like Wine, DOSBox, and various emulators. If printf is copyrighted, it will affect C, PHP, Python, the command-line application, and others. Then there are cases like C#, which is clearly influenced by Java.
The whole programming landscape would need to be audited.
A large majority of those APIs are protected by ISO, ECMA, and the Open Group with their respective copyrights, and many of them aren't available for free, especially the ISO ones.
You just get them as free beer because a couple of developers have decided to put up their money for you, especially for the ISO ones.
Such a hot take. This is not Reddit. Please expand on your thoughts and articulate them clearly. I've not yet heard a single convincing argument in favor of Oracle.
The CAFC's rulings spell it out pretty plainly. There is sufficient original creative work in the choices underpinning the design of the Java APIs that they are eligible for copyright. There are no external forces compelling the Java APIs to conform to the exact shape they now hold, and the same systems and processes could be expressed in other ways. Therefore, the "system or method" exemption to copyrightability does not hold for the APIs.
By copying Java's APIs in a non-interoperable way, Google has diluted the Java ecosystem and sown confusion amongst users of the language, thus diminishing the value of the IP held by Oracle, and Oracle is entitled to damages.
The CAFC's ruling was only clear because of the amazing amount of 9th circuit precedent they left out, going off and making their own law for some reason.
Is Android Studio the same as having IntelliJ Ultimate with Android plugins? Is it a form of the Community Edition? Will I miss out on features that land in IntelliJ, since it's a fork?
Me and my team made that plugin. It initially came from the same org that makes Bazel (rather than the other way around), but I think there's now a small team in the Android org that support it.
Nevertheless, Gradle is currently much more popular than Bazel for Android development so it makes sense that this is where the focus is.
Flutter on Studio 4 is working fine for me on Linux; I just had to enable and upgrade the Flutter and Dart plugins. I've only tried the Flutter master channel with Linux and web devices.
The new layout tools look great, I'm excited to try them. Especially the revamped Layout Inspector.
Also I noticed that the final 3.x minor release was much faster on my Mac than any of the others before it. I hope 4.0 keeps or improves the performance.
You could do that in basically any video editor that supports multiple layers and masks. Blender is an open source (3D) editor you can do this with.
Most likely they are using one of/combination of: After Effects, Premiere Pro, DaVinci Resolve, iMovie and/or Final Cut Pro as those are the more standard tools in the industry.
I'm curious: Why would you use Blender for this? I consider it a 3-D modeling tool, not a video editor.
Then again, I just started using it. So far I'm pleasantly surprised at its UI. After using the detestable UI in GIMP (and the somewhat hokey-looking one in Audacity), I didn't expect much from open source.
> I'm curious: Why would you use Blender for this? I consider it a 3-D modeling tool, not a video editor.
Because it has a nice video editor in it, that's useful for lots of things.
Might be hard to get into if you've never used Blender before. But if you've done 3D modelling in Blender, you'll be pleasantly surprised that your intuition for how to move, scale, and generally edit stuff works the same in the video editor as in the rest of Blender.
I use Blender for video editing because I already know how to use it, I already have it installed on my computers, and because it has never been inadequate. Blender doesn't have a ton of effects in the video editor itself, but you can replicate those effects in the 3D editor and in the node compositor.
It has a video editor that's actually quite good and designed for fine-tuned output, as opposed to most free software available for video editing, which is either low quality or has very little output or compositing control.
Yeah, but the animation editing resembles the same thing done in MM Flash. And I haven't seen that in many, many years. That's where the thought came from, not anything else.
It does make sense. Flash was used to create a whole generation of shitty, animated UI. I wonder how many millions of man-hours were wasted waiting for pointless animations to finish so you could click a button or enter some text.
Or finally fix the activities and fragment mess?