There are also diffs adding lambda support, tweaking various classes for compatibility with applications that use reflection to access internal capabilities, and fixing lots of OpenJDK compatibility bugs.
Android still needs to run dex bytecode somehow, so there are two possibilities for how N will work.
Option one is that Android sticks with ART and replaces Harmony with OpenJDK: from a technical perspective, that wouldn't be the end of the world, especially since the Harmony implementation is rather inefficient.
Option two is that Google ports Hotspot to run on Android and then has PackageManager convert dex bytecode back to Java bytecode on device. That would be awful, since ART is built for low-end devices and, well, Hotspot isn't.
In favor of option one is that Google is still developing ART. In favor of option two is Oracle being Satan incarnate.
I also wouldn't be surprised if Oracle has compelled Google to simply ship a copy of Hotspot, allowing developers to ship "authentic Java" APKs instead of dex-bytecode ones, with the two environments running in parallel, with two different zygotes.
> That would be awful, since ART is built for low-end devices and, well, Hotspot isn't.
That is plain Google FUD to not follow the Java standards.
There are lots of embedded devices, more constrained than Android phones, running commercial compliant JVMs like Atego, Jamaica, J9 among many others.
Also Sun/Oracle Hotspot implementations have existed since the J2ME and Embedded Java early days for devices with just a few hundred KBs, not a few hundred MB like Android.
I'm pretty sure those constrained devices are running bytecode interpreters that aren't nearly competitive with native code in terms of speed, with UIs that wouldn't cut it on even a low-end smartphone these days.
The code for these embedded devices is JITted, and I know because I worked on them. It also included some AOT, but not much. In the past one of our guys benchmarked against an older version of Dalvik and had us at about 5x with those VMs.
To be fair, they are not Hotspot, and it does have startup-time issues because, unlike the embedded versions of our JITs, it doesn't feature MVM. However, a lot of this is configurable and easily fixable. Hotspot is remarkably tunable, and if MVM is added (which is possible) it could probably beat ART in startup time as well...
ART is pretty fast by now though and pretty well understood. I don't think Google would switch to hotspot and I don't think it will need that for compliance either. They might just reuse some libraries that can be common and that's it.
Every time an Android device installs its monthly security update, it spends an inordinate amount of time, with the screen on, "optimizing" every single app.
pjmlp: original poster above referred to Hotspot, not arbitrary JVMs. Clearly Android devices can run VMs with both JIT and AOT (they do, after all), what OP questioned was whether Hotspot in particular (not some arbitrary VM) is a good VM on a mobile device. Given Hotspot's somewhat underwhelming startup speed even on desktop class computers, I think that's a very reasonable technical question to ask, and not FUD in any way.
The document you link to is not about Hotspot; it's about Oracle's Java ME VM, which is a different product from what I can tell. It doesn't mention the term "Hotspot" at all.
As a sidenote, giant commits including giant dependencies in the tree are the perfect time to include new backdoors. Nobody will know who really introduced them.
Wish people didn't do that. Separate repos are not hard to create...
A lot of stuff has been fixed or improved from the original Harmony code over the years; performance-wise, the Android implementation of the core library and the OpenJDK implementation are now similar (OpenJDK has a few more intrinsics and makes more liberal use of native methods).
You're confused on several points. First, Java VMs don't "generate bytecode": if they have JITs, they generate machine code. Otherwise, they just interpret bytecode. Second, you're confusing OpenJDK-the-set-of-libraries with Hotspot-the-VM. OpenJDK probably isn't a disaster for performance: not because it's good, but because Harmony was abysmal. (Count the allocations inside String.format.)
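To make the allocation point concrete: a single String.format call goes through java.util.Formatter, allocating a Formatter, an internal StringBuilder, parsed format-specifier objects, and a varargs Object[] (plus boxing), whereas direct StringBuilder use allocates far less. A rough sketch (exact allocation counts are implementation-dependent, so take the comments as an approximation):

```java
// Minimal sketch: both methods produce the same string, but
// viaFormat allocates a Formatter, a StringBuilder, parsed
// specifier objects, and a boxed varargs array per call, while
// viaBuilder allocates only the builder and the result string.
public class FormatCost {
    static String viaFormat(String name, int n) {
        return String.format("%s=%d", name, n); // Formatter + specifier parsing per call
    }

    static String viaBuilder(String name, int n) {
        return new StringBuilder(name).append('=').append(n).toString();
    }

    public static void main(String[] args) {
        System.out.println(viaFormat("x", 42));   // x=42
        System.out.println(viaBuilder("x", 42));  // x=42
    }
}
```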
Switching from ART to Hotspot is not a clear win. ART does AOT compilation (at least some of the time), is integrated with the system runtime (doing a compacting GC pass on app switch, for example), and interacts properly with Android's zygote-based start scheme. (It doesn't, for example, COW away all the memory benefits as a naive fork would.)
Both ART and Hotspot have pretty good code generators and allocators. There's no reason to suppose that the latter would be a better choice, technically, than the former. Google seems to agree, since ART development is ongoing.
Please read more about VM implementation schemes instead of continuing to make unfounded assertions (like "Java is based on the idea of a JIT" and "virtual method calls are slow without a JIT").
Option two wasn't even worth mentioning. There is absolutely no way Google is going to abandon ART and switch to a significantly less performant VM. AOT is here to stay.
A JIT makes many tradeoffs but it is always capable of producing code at least as good as an AOT. How much better that code is depends on many factors, such as the language, the application, and how much time/energy you're willing to spend on optimization (the latter might lead to choosing to generate code that's less optimized than an AOT).
A slightly bigger difference is not between JIT and AOT, but whether you can dynamically load code at runtime or not. If not, that opens the door to some whole-program optimizations, or, at least, removes the need for guards, generated by the JIT, that are triggered when new code is loaded. In any case, mobile applications don't load code dynamically.
Of course a JIT is capable of producing code as good as an AOT and perhaps even better since it can capture more profiling data. The problem with a JIT is the startup time and this has not gone unnoticed by Oracle as even they've started working on AOT.
Not startup but warmup (i.e. the time until the application is fully optimized). There are other problems with a JIT on small devices, such as increased memory and energy consumption (each may be significant or not, depending on how the JIT works). I am a big fan of JITs, but as with everything in software, it is a tradeoff.
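Warmup is easy to observe on a desktop Hotspot: the same method speeds up across successive batches as it goes from interpreted to JIT-compiled. A rough sketch (timings vary by machine and VM flags, so no specific numbers are promised):

```java
// Rough warmup demo: time the same work in successive batches.
// On a typical HotSpot JVM the first batch is slowest because the
// method starts out interpreted and is JIT-compiled as it gets hot.
public class Warmup {
    static long work(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) sum += (long) i * i;
        return sum;
    }

    public static void main(String[] args) {
        for (int batch = 0; batch < 5; batch++) {
            long t0 = System.nanoTime();
            long r = work(2_000_000);
            long t1 = System.nanoTime();
            System.out.println("batch " + batch + ": "
                + (t1 - t0) / 1_000 + " us (result " + r + ")");
        }
    }
}
```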
> the latter might lead to choosing to generate code that's less optimized than an AOT
With ART compilation happening on the device (during installation instead of at runtime, but still), it suffers from the same trade-off of compilation speed vs. performance of the generated code.
> In any case, mobile applications don't load code dynamically.
Android has a DexClassLoader. I'm sure some people use it.
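For a sense of why dynamic loading matters here: the invoked class isn't visible to the compiler at build or install time, so whole-program AOT assumptions break. DexClassLoader is Android-specific, so this sketch uses plain-JVM reflection as an analogue; the loaded class (java.util.ArrayList) is just a stand-in for code that could arrive at runtime:

```java
import java.lang.reflect.Method;

// Desktop analogue of what DexClassLoader enables on Android:
// resolving and invoking a class that is not referenced statically,
// so an AOT compiler cannot see the call target ahead of time.
public class DynamicLoad {
    public static void main(String[] args) throws Exception {
        // The class name could just as well come from a config file
        // or a downloaded dex/jar, which is what defeats whole-program AOT.
        Class<?> cls = Class.forName("java.util.ArrayList");
        Object list = cls.getDeclaredConstructor().newInstance();
        Method add = cls.getMethod("add", Object.class);
        add.invoke(list, "hello");
        System.out.println(list); // [hello]
    }
}
```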
> it suffers from the same trade-off of compilation speed vs. performance of the generated code
Sure, which is why HotSpot may be better.
> Android has a DexClassLoader. I'm sure some people use it.
I didn't know that (not an Android dev), but that only means that some form of JITting may be beneficial anyway (depending on how popular this feature is).
As a guy who used to work for Sun on JITted code and is now doing AOT for https://www.codenameone.com/ I've got to say that JIT always beats AOT in runtime. It can also beat it in startup when properly designed (MVM, caching etc.).
Generalizations about which technique beats the other technique reflect a lack of technical maturity. Both have advantages, and you're doing a disservice by advocating the use of one or the other exclusively ignoring differences in environment, circumstances, and workload.
The very fact that Oracle has started to address the slow startup times of JVM applications by finally working on AOT compilation is an admission of the technical immaturity of the JVM. Also, the only relevant environment here is mobile and startup times are paramount in this environment.
> The very fact that Oracle has started to address the slow startup times of JVM applications by finally working on AOT compilation is an admission of the technical immaturity of the JVM.
Not at all. If you listen to the talk introducing that work, you'll see that it is designed to address a very particular (and relatively unusual) use-case, which is important to some specific (yet lucrative) Oracle customers.
As far as Android is concerned, AOT is here to stay. And yes, I'm aware ART has a JIT and it's probably used for devices that cannot handle the overhead of AOT, but I imagine it's seldom used considering the specs of today's phones and the requirements of Google's CTS.
To explain if you are just joining in: this pretty much means Oracle v. Google, a case with major ramifications for the industry, has been settled out of court. I don't see how this can be interpreted any other way.
This is a git repo, so "authored" tries to map to the person who wrote the content, and the "idea" of the commit, if there is such a thing. It's set when the commit is initially created, and for the most part not changed unless explicitly requested. The commit date is the actual time the commit object was made; this can differ if the commit is amended, rebased, cherry-picked, the result of a squash/fixup, etc. If a commit sits on a branch for months, then gets rebased against master, code-reviewed, and merged, those two dates will be significantly different.
This usually works as you might expect. If your coworker Joe authors a commit on a branch, and you cherry-pick it onto another branch, that cherry-picked commit is authored by Joe (and the time reflects when Joe authored it) and committed by you (at the time you performed the cherry-pick).
You have gone a little further and provided some general reasons as to "why this might happen", but you still seem to have missed the essence of the question, which was almost certainly more "can someone come up with some guess why, in this particular circumstance, given the theories about this commit being influenced by a settlement of the Oracle lawsuit, this commit was authored in February but only landed in November?".
This means that a person named Piotr Jastrzebski created a commit in February. This is a different commit based on that one (not in the sense of revision ancestry, but in the sense of rewrite history): this commit is the N-th rewrite of Piotr Jastrzebski's commit, for some N > 0, and the rewriter ("committer") is Narayan Kamath.
The rewrite can have different content: for instance, it can be altered to merge against a different parent.
An example of a rewrite is a simple cherry-pick (e.g. from one branch to another).
All these things are commits. The initial authoring is a commit, and this latest rewrite by a different person is a commit.
You have answered how it is possible, not why it happened. That is like someone pointing out that this "mysterious" commit is actually quite easy to explain, as a commit is an object in a git database, and we have all the metadata to see who made this one. I am extremely well aware of how it works (I give lectures at UCSB about git, where I start by explaining the internal file formats and work my way up to the various command-line tools), but I still found the question here of "why was this authored in February but only landed in November" fascinating, given the speculation about Oracle.
Or that Google has simply decided to use OpenJDK and abide by its license, an option that has always been open to them (yet they have so far rejected).
This is an interesting comment. GPL-licensed code is divisible, so Google could technically achieve the same thing by adding a comment in the NOTICE file that said the API method signatures are copyright Oracle (GPL) and the rest is Apache. But the NOTICE file doesn't say anything about Oracle at all! Nor did they carry over the OpenJDK LICENSE file, which adds to the mysteriousness.
I only did a cursory examination of the commit. Someone please post if I'm wrong about this.
> GPL-licensed code is divisible, so Google could technically achieve the same thing by adding a comment in the NOTICE file that said the API method signatures are copyright Oracle (GPL) and the rest is Apache.
That doesn't work. GPL is divisible (in fact, you can do whatever you want with the code), but the license must be applied to the entire deliverable, up to (excluding) classpath-linking which is explicitly exempted by the OpenJDK license -- so in this case, the whole runtime minus Google-only packages if they're classpath-linked only.
My thought was that Google may simply choose to make Android GPL + classpath exception. The reason for not doing so when Android was young (I can only assume) was the fear that phone vendors would balk at a runtime that doesn't let them make proprietary changes (of course, Linux doesn't either, but I guess the thought was that the phone manufacturers are likely to make changes that are closer to the application). But now, given Android's popularity, phone vendors would swallow whatever license Google imposes on them (and would still be free to make proprietary changes to classpath-linked portions of the runtime).
This is a win for everyone: Google gets to expend less effort maintaining the runtime, plus they get Oracle off their back (at least for future Android versions); Oracle gets to have Java (or something close enough to it) on lots of smartphones, and the developer community gets to have true Java interop (with all new Java features), and probably a higher-quality runtime.
Later, Google would be free (but not compelled, although that depends on a future settlement) to make Android fully Java compliant with one of the Java standards, but that is an orthogonal issue.
Hmm, this may be orthogonal, but it certainly can't hurt. Quasar already has AOT instrumentation and we're working on making it available on Android as-is. OTOH, even if Android goes OpenJDK, it's doubtful they'll support agents, so you'll have to rely on AOT instrumentation anyway.
But to get us back on point, it will certainly make the work of migrating any Java library to Android easier.
Thanks for ruining my holiday season. What's the precedent situation if they do settle it? Is Oracle's latest victory binding even if the final resolution of the case isn't decided by the court?
The federal circuit's opinion concerning the copyrightability of APIs is only binding precedent on the federal circuit itself and only when interpreting 9th circuit law. The FC's usual jurisdiction is over patents, not copyrights, and they only got involved in this case because Oracle originally asserted some patents that failed to find traction. In particular, the real 9th circuit would be free to draw their own conclusions should they hear a future case concerning the copyrightability of interfaces.
If this inference is right, I hope this means that Google will enable the web community to use the JVM with, or as an alternative to, WebAssembly. Hotspot is absolutely incredible technology, and its competitors are still many years away from coming close to matching its capabilities.
And throw out the hard work __ALL__ browser vendors are doing TOGETHER on WebAssembly? In favor of a VM that clearly has restrictions and is owned by a litigious organization? Bloating browsers by adding another VM, which will hurt throughput by having multiple GCs to synchronize?
The JVM ecosystem and Hotspot are certainly modern marvels, but there are hurdles greater than the potential benefit.
"a VM that clearly has restrictions and is owned by a litigious organization"
Indeed - if there's one thing that this case has shown, it's that Java is legally poisonous. If a company with the size and power of Google is being forced to take a project with the size and importance of Android in directions it really doesn't want to go technically, well, that's a very big deal. And I wouldn't like to think what would happen to any smaller group that wanted to take a Java technology down a path that isn't Oracle-approved.
No. Variety breeds better products, and WebAssembly is a pretty different thing from the JDK. You might as well say that WebAssembly competes with LLVM.
I don't see anyone adding Java back into the browsers, that ship probably sailed.
I also have a huge problem with the litigious nature of Oracle... However, this would mean that something was settled, which might reduce the likelihood of future lawsuits, especially in regards to patents or copyright. WebAssembly didn't go through that process, which might mean that once it picks up, some anonymous patent holder might start attacking it. It is vendor-neutral, though, which is both a plus and a legal liability, as there is no single "responsible" entity.
The JVM is already open source. If this matter has indeed been settled, then the litigious-organization issue might have been too. WebAssembly doesn't have a GC, and probably won't for some time. It also doesn't have ~20 years of performance tuning on different microarchitectures the way the JVM does. Benefits include getting Scala, Clojure, Jython, JRuby, and about 30 other languages working client-side in the browser for free. Not to mention that the languages that have been statically compiled will run way faster than the equivalent JS. That would be great for mobile, and for the spread of the open web.
I can picture Google's lawyers using that in court against Oracle.
> then the litigious organization issue might have been too
That might is a big MIGHT, the same kind that keeps browser vendors from shipping various media codecs and news companies from using various media streaming protocols.
> WebAssembly doesn't have a GC
The GC all browsers currently have and will not get rid of that I was referring to was the JavaScript VM.
> It also doesn't have ~20 years of performance tuning on different microarchitectures the way the JVM does.
And the JVM doesn't have the start up performance of any JS VM. I would estimate the number of exploits to be significantly higher for the JVM, but worth less.
> Benefits include getting scala, clojure, jython, jruby, and about 30 other languages working client side in the browser for free.
JS is the most compiled-to language in existence. Also, my point has been that none of this would be free. How do you integrate the JVM with the DOM? Who's going to rewrite all that? Whose GC cleans up around here?
> Not to mention the languages that have been statically compiled will run way faster than the equivalent JS.
Are you talking about WebAssembly here? Cause it sounds like you are.
I don't think this is a significant issue because it's a one time cost. It could be done at browser startup. Even for a cold start, Hotspot and V8 are comparable.
    $ time java com.Hello

    real    0m0.096s
    user    0m0.103s
    sys     0m0.013s

    $ time node /tmp/hello.js

    real    0m0.073s
    user    0m0.067s
    sys     0m0.007s
I actually worked for Sun/Oracle on VMs, but on the embedded team, not the Hotspot team. The trick is actually simpler, and it's called MVM, which is something Sun kept avoiding on the desktop/server for some stupid reason, but we did do it on mobile and it made startup almost instant.
The trick is to share one VM instance between multiple apps. So when the OS starts you start the VM process and then fork with a lot of the JITted code already in place so you get almost instant startup and reasonable process isolation.
In mobile where there are some restrictions this is very practical. On the desktop/server this gets a bit tricky with bytecode manipulation, classloaders etc. But this is totally doable.
My personal uninformed theory is that Sun or Oracle didn't do this because they didn't care. MVM has two use cases: faster startup and lower overhead on desktops (which they don't care about), and server efficiency in small-scale deployments (which they don't care about either). AFAIK Google did some work on MVM for App Engine Java, but those are just rumors.
> The trick is to share one VM instance between multiple apps. So when the OS starts you start the VM process and then fork with a lot of the JITted code already in place so you get almost instant startup and reasonable process isolation.
Android does this, unless it changed recently. The common parent process is called "Zygote".
MVM relies on the runtime system to enforce security isolation. In a system like Android that allows unrestricted loading of native code, this scheme can't work, since there's no way to get arbitrary native code to play along with the runtime security model.
Personally, I feel much more confident with the kernel enforcing application isolation than I would feel about relying on the Java security model.
As a sibling comment pointed out, Android already does this. To learn (an insane amount of) more detail about how this works, read this article I wrote a while back on how Process Loading on various systems is optimized.
It's not the same thing at all. Android first makes a process template, then takes clippings for each process it wants to run. That's not the same as running different applications in the same process.
The person who worked for Sun/Oracle, the one who first mentioned this technique earlier in this thread, was quite clear that the technique involves loading a single VM and then "fork[ing] with a lot of the JITted code already in place and reasonable process isolation". You are, of course, correct that this is not the same as "running different applications in the same process", but that is not the technique that was described.
> The trick is to share one VM instance between multiple apps. So when the OS starts you start the VM process and then fork with a lot of the JITted code already in place so you get almost instant startup and reasonable process isolation.
> Remember that Web Assembly is designed to run in a JS engine.
Is it? It's a very low-level bytecode that's designed to be efficiently compiled in one pass, and completely unrelated to the JS engine. It isn't meant (at least at this time) to have an optimizing JIT (like HotSpot or V8). It is meant as a good target for languages that don't rely on/can't benefit greatly from good JIT optimizations, such as C/C++ (or Rust).
Of course, one could compile a JVM to wasm, provided that wasm allows applications to write to executable memory (which I doubt).
It is (though that's not the only thing it's designed for), but in a JS engine setting it only uses a few specific parts of the JS engine, such as the JIT backend to do the actual code generation work and interoperability with JS.
Indeed, wasm won't likely ever expose directly executable memory. But in the future it may expose JIT capabilities like creating and calling new functions, declaring "patchable" code fragments which can be manipulated through APIs, and so on, and with these capabilities it isn't unreasonable to think about porting JITing VMs onto wasm.
All code can be a patent minefield. I predict we'll see patent trolls going after browser makers as soon as WebAssembly becomes popular. The fact that Java has a single, notoriously litigious owner actually works in its favour here - anyone trying to patent-troll Java will be issuing an invitation to Oracle's attack dogs, whereas WebAssembly could easily suffer a tragedy of the commons where no one browser vendor wants to spend the money to defend it.
OpenJDK has a full patent grant[1]. It is 100% open source in every way possible, its use is unrestricted, and it is released under the same license as Linux. AFAIK, Oracle has never sued anyone making use of it for whatever purpose whatsoever. It is important to note the undisputed fact that -- at least so far -- Google has chosen not to use OpenJDK (or, in any case, they did not comply with its license), and therefore its open-source status is irrelevant to the court case.
[1]: That grant is automatic, by the open source license. In addition (and unrelated to OpenJDK), there is an explicit patent grant to conformant implementations of Java (be they based on OpenJDK or not).
Of course, if you don't conform, Oracle will sue you into oblivion. Java is free software as long as you use it in exactly the way Oracle wants, right? It's for this reason that Java is poison to any of my projects. I'll choose Node-fucking-JS over Java, because nobody is going to sue me for using JavaScript the way I want.
> Of course, if you don't conform, Oracle will sue you into oblivion.
What? OpenJDK is as free as they come. No conformance with Java necessary, you can do with it whatever you damn well please. You can use it to implement .NET if you want. Oracle has never (to the best of my knowledge) sued anyone for the use (or modification) of OpenJDK.
Okay, so can I modify it so that it doesn't conform to the Java specification? Can I modify it bit-by-bit until it's bytewise identical to Harmony? Of course I can't. Because if I do, Oracle will sue me.
Oracle is lying. They claim to be offering free software, but will sue you once you take advantage of that freedom. It's a fucking trap.
Do you really expect the technology community to embrace Oracle technology after Oracle spits in their food and shits in their sink?
> Okay, so can I modify it so that it doesn't conform to the Java specification? Can I modify it bit-by-bit until it's bytewise identical to Harmony?
Absolutely (on both counts, although if it is identical to Harmony you may be in violation of the license due to a collision between GPLv2 and ASL, as ASL imposes further restrictions which GPLv2 does not allow -- see the next paragraph).
> Because if I do, Oracle will sue me.
They will not. In fact, they explicitly allow you to do whatever you want with it. They are not placing any restrictions; their own license (same as Linux's) does not allow them to do so (section 6 says: "You may not impose any further restrictions on the recipients' exercise of the rights granted herein").
Oracle sued Google, not you. They didn't sue them for using Java, and they certainly didn't sue them over OpenJDK. Of course, it has been Google's PR department's strategy to make you think that the lawsuit may apply to you, but it doesn't. The circumstances leading to the lawsuit were very unusual.
> Oracle is lying.
Maybe about other things (I don't know a large company that doesn't lie), but not about this.
> They claim to be offering free software, but will sue you once you take advantage of that freedom. It's a fucking trap.
Again, to the best of my knowledge, that has never happened.
> Do you really expect the technology community to embrace Oracle technology after Oracle spits in their food and shits in their sink?
I don't expect anyone to do anything other than to understand the facts and then make their decisions[1]. Oracle has earned the distrust of many developers, but I don't find Google to be any more likable. Personally, I'll take Oracle's old-school greed over Google's sneaky espionage and manipulative PR, but that's just me.
----
[1]: I am willing to bet that very few people understand what the Oracle v. Google court case is all about, and the loudest voice was by far Google's PR.
> WebAssembly doesn't have a GC, and probably won't for some time.
Yeah... because it's assembly. That's the whole point. You don't WANT a GC. It would be the wrong layer to place a GC in. Ask most professional game developers if they would consider using a platform with a GC. They won't, because you can't build a realtime system without stuttering if you're dealing with a GC.
Thousands of games have shipped with GC. Every Unreal and Unity game uses GC. Every XNA game used GC. Many of the largest, most popular AAA games use GC. So yes, we can ask professional game devs if they want a GC. Most will say "yes". Like any tool, you know when to use it and when not to.
Further, a lot of shared data structures in very high-performance C++ end up with "free lists" to allow multiple threads to perform atomic delete operations on the data structure. The point being that memory is no longer freed deterministically.
"If there's other threads working here, atomic compare and swap this pointer to the free list, otherwise last one out clean up."
Take a look at some of the code from Chapter 7 from Anthony Williams "C++ Concurrency in Action" [0].
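The CAS-based free-list pattern described above can be sketched as a Treiber stack. The book's examples are C++; this is an analogous Java sketch using AtomicReference (note that Java's GC sidesteps the ABA/safe-reclamation problems the C++ versions must also solve, so this is the pattern only, not a faithful port):

```java
import java.util.concurrent.atomic.AtomicReference;

// Sketch of a lock-free free list (Treiber stack): freed entries are
// pushed with a CAS, and reuse pops with a CAS. Multiple threads can
// free and reuse concurrently without locks.
public class FreeList<T> {
    private static final class Node<T> {
        final T value;
        Node<T> next;
        Node(T value) { this.value = value; }
    }

    private final AtomicReference<Node<T>> head = new AtomicReference<>();

    // "Atomic delete": push the freed entry onto the list head.
    public void free(T value) {
        Node<T> n = new Node<>(value);
        Node<T> old;
        do {
            old = head.get();
            n.next = old;
        } while (!head.compareAndSet(old, n)); // retry if another thread raced us
    }

    // Pop an entry for reuse, or return null if the list is empty.
    public T reuse() {
        Node<T> old;
        do {
            old = head.get();
            if (old == null) return null;
        } while (!head.compareAndSet(old, old.next));
        return old.value;
    }

    public static void main(String[] args) {
        FreeList<Integer> fl = new FreeList<>();
        fl.free(1);
        fl.free(2);
        System.out.println(fl.reuse()); // 2 (LIFO order)
        System.out.println(fl.reuse()); // 1
        System.out.println(fl.reuse()); // null
    }
}
```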
In most of those cases, the performance-sensitive game engine itself does not use GC; GC is used in the scripting language that implements the high-level game logic.
The only advantage I can think of to putting the JVM in a browser is if you intend to run java code. (God, I hope not). It's not like they're lighting the world on fire with their performance running dynamic languages. (I mean, it's fine, but it's nothing special)
Besides, we already saw what java-in-the-browser looks like, and it was a huge failure.
> It's not like they're lighting the world on fire with their performance running dynamic languages. (I mean, it's fine, but it's nothing special)
Yes, they pretty much are[1], and yes, it is[2].
Their 80 kLOC JS compiler is on par with V8, they're matching or beating PyPy when running Python, and their Ruby performance is out of this world. The downside, though, is that it has a long warmup time, which makes it unsuitable for web pages.
The Java plugin's mistake was to embed a rich sandboxing mechanism right in the middle of the language layer. The Java runtime, in the middle of parsing bytecode and registering class hierarchies, is supposed to enforce who can access what things, not simply as an advisory mechanism or safety check (as public/private is just about everywhere else), but as a security mechanism under active assault, with only this single line of defense between untrusted code and full local privileges just like native code. And, like anything with a complicated security policy and a wide attack surface, it had no chance.
Stick a regular, unprivileged JVM, with no secure classloader magic, inside a straightforward non-Java low-level sandbox like NaCl or even just PPAPI + Chrome's renderer sandboxing (like Pepper Flash or PDFium) and it'll probably hold up just fine.
And honestly that's what Android does. Java isn't a security boundary on Android, and the NDK makes this explicit. Each app runs as its own UID, and the kernel is taught to isolate users a bit more than usual for UNIX, and that holds up pretty well -- not perfect, but far better than the Java plugin does.
(To be clear, I'm not advocating the JVM as a platform for web content. Just that, if somehow it turns out that the JVM is in fact the right platform, the sandboxing problem is not a blocker.)
In the context of the recent Juniper attack, where some unauthorized code was committed without anybody noticing for years, it seems like it would be easy to hide a backdoor in such a big commit.
How do you go about checking the integrity of the code when you have so many files?
8902 files were changed, most of them added, and the commit says it's just importing OpenJDK files. Is there anybody checking that the imported source files haven't been modified to include some kind of backdoor?
I like this thinking. Someone with a big export of code repos could do some automated diff/analysis on open-source code to find vendored imports that have been modified from upstream or are outdated, to facilitate manual security audits.
The commit references ojluni. This relates to "luni" in the Android source, which stands for lang, util, net, io. Sounds like there are plans to replace the Harmony implementation with the OpenJDK one. License differences are definitely interesting, but this could also be related to performance and completeness - luni is a fairly small set of classes, whereas the ojluni import brings in a ton more.
But what I don't understand is why they're importing the full AWT API! That's nuts.
I took a closer look at the commits, and I think it's not as bad as it looked at first. They imported all classes of OpenJDK in the February commit posted here, but then removed things like Swing at some later point in time. It's not in master now.
I'm starting to wonder if Oracle got them to go JCK. Would be a technically crappy development that serves little actual purpose. Or maybe something's cooking and we don't know yet.
IIRC, Oracle's claim was that Google's use of the API wasn't fair use because they didn't implement all of it; they weren't trying to build a compatible product. So importing enough of Java to be fully compatible might be a result of the lawsuit, yeah.
Since Android's zygote does preloading of system classes and ART does precompilation to native code, I think having this many additional classes in the core is bound to have an adverse effect on app startup times and device memory consumption. I'm surprised they're doing this so sneakily.
It's exciting to see all the goodies from Java Sound finally coming in. I couldn't care less about CORBA, though; I'm just glad we get better support for audio coding from Java without having to resort to JNI and the native layer in C/C++.
I recall seeing a "libopenjdkjni" related review on android-review earlier today. It was just a makefile change though, no real code diff. Unfortunately I can't find the review anymore.
Google has made a point of not using GPLed code (except Linux) or Sun/Oracle code in Android, for technical and legal reasons. There's even a lawsuit about it. So it's surprising to see GPLed Oracle OpenJDK code being committed anywhere near Android.