You can use the built-in extractors for standard file formats (CSV, JSON, XML, XLS, ...), use official add-ons to connect to common external services (SQL databases, ...), or write your own extractors.
Once you have generated a data stream, you can apply your transformations using simple Python.
Wrangling data is great, but keeping it around is better. Use our standard writers (CSV, JSON, XML, XLS, ...) or connect to your custom services.
Tired of learning new APIs? We promise you'll be up and running in ten minutes if you know some Python.
A key principle of good software design is to build small, single-purpose tools (think UNIX ...) that can be chained together. This makes unit testing easy and greatly improves system maintainability.
It's just Python! We worked hard to provide the most transparent API possible, built on the standard data structures you already know. And you can benefit from the whole ecosystem of libraries to connect to basically anything.
Extracting data is the first step of any transformation pipeline.
Any source Python can talk to is supported, since Bonobo is regular Python. Use SQLAlchemy, OAuth-based services, APIs, open data, LDAP, etc. Anything, really.
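As a sketch: an extractor is just a callable or generator that yields rows, one at a time. The data and field names below are made up for illustration; a real extractor would pull from an API, a SQLAlchemy query, an LDAP directory, and so on.

```python
def extract():
    """Yield rows one by one into the data stream.

    In real use, replace this hypothetical in-memory list with calls
    to whatever source your pipeline reads from (HTTP API, database,
    file, ...).
    """
    for row in [
        {"name": "alice", "city": "Paris"},
        {"name": "bob", "city": "Lyon"},
    ]:
        yield row
```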
Transform the data using standard operations. More complex operations (joins, products, sorts, etc.) are supported by built-in, configurable transformation classes.
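A simple transformation is just a plain function or generator that takes a row and yields the modified row. The field name below is an assumption for the sake of the example.

```python
def transform(row):
    """Uppercase the (hypothetical) 'name' field of each row,
    leaving the other fields untouched."""
    yield {**row, "name": row["name"].upper()}
```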
Save your data. In a database, in a file, on a remote service, etc. Once again, regular Python means you can use anything available out there, and as you know, the Python cheese shop (PyPI) is pretty complete.
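A loader can be as plain as a function that writes each incoming row somewhere. Here is a minimal sketch using only the standard library's csv module and an in-memory stream; the row contents are hypothetical.

```python
import csv
import io

def load(row, writer):
    """Write one row (a dict) out through a csv.DictWriter."""
    writer.writerow(row)

# Hypothetical usage with an in-memory stream standing in for a file:
stream = io.StringIO()
writer = csv.DictWriter(stream, fieldnames=["name", "city"])
writer.writeheader()
load({"name": "alice", "city": "Paris"}, writer)
```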
Run the thing. Many different execution strategies are supported, but the reasonable defaults should be sufficient for now. Execute it using the Python interpreter (python mygraph.py), the provided CLI (bonobo run), or within a Docker container (bonobo runc).