It is encouraging to see that Heroku is actually working on rolling out new things -- for a while it wasn't clear if it was just frozen in "squeeze all the juice out with as little investment as possible" mode forever.
I'm having trouble interpreting that blog post to understand what it might actually mean for me, whether I'd want to use it, or what advantages it would have. Looking forward to learning more.
> Today, OCI images are the new cloud executables. By moving to OCI artifacts, all Fir apps will be using images with compatibility across different environments. This means you can build your application once, run it locally, and deploy it anywhere, without worrying about vendor lock-in or compatibility issues.
Is this something I can try out locally without signing up for heroku first?
Marketing speak aside, I'm curious what is actually changing for the end developer in the "next generation". Heroku already supports building and deploying containers, and that will presumably continue.
One big part that I'm personally excited about is support for Cloud Native Buildpacks. It's an open spec, part of the CNCF, and it produces container images. You can use it to debug locally, and you can try it out now: https://github.com/heroku/buildpacks/blob/main/docs/ruby/REA....
To go along with that, we've built and maintain a Rust framework for writing CNBs: https://github.com/heroku/libcnb.rs. I maintain the Ruby CNB, so I'm pretty excited to see some of my stuff in action.
> Heroku already supports building and deploying containers
Kinda. Heroku created a container ecosystem before OCI images were a thing. Apps deployed to the current Cedar infrastructure are deployed as "slugs": basically a tgz of the application directory that's loaded into an LXC container and unpacked to run on top of a base image (also called a stack). See https://devcenter.heroku.com/articles/slug-compiler.
One big benefit of moving toward a standards-compliant future instead of a homebrew one is that customers also get access to that ecosystem. That's what enables things like running and debugging application images locally. It's the standards and the community. We went fast and blazed some trails; now we're aiming to "go far," together with the community.
I am pleased to see support for OpenTelemetry on the way. As a heavy user of AWS Lambda, the observability provided by X-Ray is invaluable for troubleshooting and improving performance.
We've been on Heroku Enterprise for 8 years now. I read your comment and clicked on the link with so much enthusiasm.
Duh. You guys have completely forgotten who your audience is. Your audience is _application developers_. I have no idea what all that mumbo jumbo in that article means, _and that's why I pay Heroku_.
I'm on Heroku because I don't want to know what "cloud native", Fir, and OpenTelemetry are. You're telling me I should get excited about Heroku? How about improving your autoscaling options so that narrow response-time-based scaling isn't the only one?
All that article tells me is that you guys are maybe improving your underlying infrastructure. Meh. Good for you. From a customer (AKA Application Developer) perspective nothing has changed.
The blog post is one of three published recently. It's from Terence Lee, an architect and former maintainer of Bundler (Ruby package manager). He co-founded the Cloud Native Buildpack project and was part of open sourcing the original Ruby buildpack. He gets pretty into the weeds with the tech.
The other article that hasn't been linked yet is this one: https://blog.heroku.com/next-generation-heroku-platform. It's still not exactly what you're asking for ("give me the application features"), but it is a little less lingo-heavy.
One thing kinda hidden in there:
> and AWS Graviton into the platform.
That means ARM support. The Heroku-supported buildpacks already work with both AMD (x86) and ARM (aarch64). Think Intel Macs versus Apple's M(1/2/3/4) chips.
> From a customer (AKA Application Developer) perspective
I posted another comment earlier. Local debugging with CNBs is pretty neat. But I also agree: I see this still as an "investment" phase. This is the start of the work that gets us more fun stuff later.
> How about improving your autoscaling options so the narrow response-time-based scaling is not the only option?
This is not my team, so I'm not speaking from first-hand experience. It's my understanding that Kubernetes has a pretty rich ecosystem for autoscaling. On our current platform, if someone wants to try out an autoscaler, it's a bespoke solution that's difficult to try in the wild. Moving to more standards-based, community-backed infrastructure means that what's easy to do on Kubernetes should also be easy to do on Heroku's new platform.
I hear that you care about autoscaling options and I'll pass that along. Anything specific in that area?
I guess I was assuming it was computed, not manually entered, which would make it the most trivial of bugs; I was just trying to have some fun with a bug report.
Even if it was computed, the only reasonable interpretation of a date given without a timezone (either explicitly or implicitly from the location) is that it's in UTC.
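The ambiguity is easy to demonstrate in Python: a timestamp string with no offset parses to a "naive" `datetime`, and pinning it to UTC has to be an explicit step. A minimal sketch (the date string is made up for illustration):

```python
from datetime import datetime, timezone

# A timestamp with no timezone information parses as "naive":
naive = datetime.fromisoformat("2024-11-18 09:30:00")
assert naive.tzinfo is None

# Interpreting it as UTC is an explicit decision, not a default:
as_utc = naive.replace(tzinfo=timezone.utc)
print(as_utc.isoformat())  # 2024-11-18T09:30:00+00:00
```

Any code comparing the naive value against `datetime.now()` in local time would silently be off by the local UTC offset, which is exactly the kind of bug being reported.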
But can I write plugins for it? My understanding is that it only implements a subset of the common plugins (and doesn't do any of the linting that pylint is useful for), which is how it avoids scanning the filesystem for plugins?
I usually think of "coding" and "programming" as fairly interchangeable words (vs. "developing", which I think better encapsulates both the design/thinking and typing/coding aspects of the process).
That was the point of the lawsuit. But also seems like the law, museum, and lots of people here missed the point too.
The guy was paid less than $6K for his work. He wasn't handed a "giant pile of cash" either; he will now have to pay back the cash that the museum (for whatever reason) lent him, along with legal costs amounting to ~$11K.