T-ORCH: Orchard
I’ve decided to evaluate Crow and Pistache for Orchard, the REST API in T-ORCH.
Django Rest Framework seemed to provide the necessary features, but I felt the user management needed to be out of band, which makes most of those features unnecessarily complex for what needs to be done. I have also long expressed a community need for C/C++ interfacing with AMQP/RabbitMQ, to keep Java from dominating this area of architecture. In addition, in my last DRF project I found the Pika interface for AMQP to be very slow when interfacing with an MQ, to the point of being a detriment to any project using it.
I may break Orchard up into microservices, though: one dedicated microservice for user management, say, and another for what the users actually do in Orchard. I haven’t decided yet; it will be determined entirely by how well I can abstract the API library interface with whatever I end up going with. Crow so far seems to be exactly what I need; Pistache is promising but somewhat less integrable.
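To keep that decision reversible, the plan is to code Orchard against a thin abstraction rather than directly against Crow or Pistache. Below is a minimal sketch of what such a framework-neutral interface could look like; every name in it is hypothetical and belongs to neither library.

```cpp
// Hypothetical framework-neutral HTTP layer for orchard. A Crow or
// Pistache adapter would implement RestBackend; names are illustrative.
#include <functional>
#include <string>

struct HttpRequest {
    std::string method;  // "GET", "POST", ...
    std::string path;    // requested path, e.g. "/request/9003"
    std::string body;    // raw request body (JSON in orchard's case)
};

struct HttpResponse {
    int status = 200;
    std::string body;
};

using Handler = std::function<HttpResponse(const HttpRequest&)>;

class RestBackend {
public:
    virtual ~RestBackend() = default;

    // Bind a handler to a method and path. A Crow adapter would translate
    // this to CROW_ROUTE(...); a Pistache adapter would use its
    // Rest::Routes equivalent.
    virtual void route(const std::string& method,
                       const std::string& path,
                       Handler handler) = 0;

    // Block and serve on the given port.
    virtual void run(unsigned short port) = 0;
};
```

If an interface like that holds up, swapping Pistache in for Crow, or splitting Orchard into microservices, becomes a backend change rather than a rewrite.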
T-ORCH SPEC
Authorship and Legal
T-ORCH is intended to be a scalable service orchestration layer for Service Oriented Architecture. It was wholly designed and implemented by Chris Punches in 2018. It is owned by this project’s parent group, SILO GROUP, LTD., and Chris Punches, and is released under the Attribution-NonCommercial-NoDerivs license. You may not create or implement designs based on it without my explicit permission in each case.
Abstract
T-ORCH allows you to centrally control, administer, and orchestrate services and microservices across environments or within a single environment. It is intended for complex application-layer orchestration, acting as a command and control system that triggers actions within the services that are aware of it.
The diagram below outlines component interoperation at an abstract level:
Components
- The controller, which will be called bonebox.
- The API, which will be called orchard.
- The MQ, which will always be just MQ.
- The consumer, which will be called harvester.
- The database, which will always be just DB.
- The service, which is what this whole thing revolves around and isn’t really part of the solution, but needs to be aware of how it should interact, presumably through shared library calls. For simplicity we’ll just call this the service, but it’s technically a client to orchard.
Bonebox Controller
The bonebox controller registers a request or cancels a request. It can also check on the state of requests. It is the point at which bonebox GUIDs are registered. It is able to obtain all available service GUIDs and request GUIDs.
Orchard API
The Orchard RESTful API reports on request state; it accepts request creations, updates, and deletions from the controller, which it relays to the MQ for buffering. It polls request state directly from the DB, provides a way for services to check for new requests, and relays request state updates from services to the MQ. It is the point at which request GUIDs and bonebox controller GUIDs are created, and the point at which GUIDs are paired to each other. It also authenticates users.
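As a rough illustration only, here’s what one of those endpoints might look like in Crow; the route shape, port, and response fields are assumptions for the sketch, not the final Orchard API.

```cpp
// Sketch of a single orchard endpoint in Crow: report the state of one
// request. URL shape and fields are assumptions, and the state is
// hard-coded where a real handler would poll the DB.
#include "crow.h"

int main()
{
    crow::SimpleApp app;

    CROW_ROUTE(app, "/request/<string>")
    ([](std::string request_guid) {
        crow::json::wvalue result;
        result["request guid"] = request_guid;
        result["state"] = "REG";  // a real handler would read this from the DB
        return result;
    });

    app.port(8080).multithreaded().run();
}
```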
Service
The service, besides performing its normal function in the environment, gets new requests once they’ve been registered, acknowledges requests, and marks requests as complete or failed. The service is associated with a bonebox by bonebox, but self-registers via Orchard to create a service GUID consumable by bonebox for request associations.
MQ
The MQ receives all request creations and updates (state changes), including request cancellations and deletions, from the Orchard API.
Harvester Consumer
The harvester consumer relays requests from the MQ and inserts them into the database.
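A sketch of what harvester could look like in C++, assuming the SimpleAmqpClient wrapper around RabbitMQ; the queue name, host, and the store_request() helper are all assumptions.

```cpp
// Harvester sketch: drain request objects from the MQ and hand each one
// to the DB layer. Assumes the SimpleAmqpClient library; queue name,
// host, and store_request() are hypothetical.
#include <SimpleAmqpClient/SimpleAmqpClient.h>
#include <cstdio>
#include <string>

// Stub for illustration: a real harvester would INSERT/UPDATE the
// requests table here.
void store_request(const std::string& json_payload)
{
    std::printf("storing: %s\n", json_payload.c_str());
}

int main()
{
    AmqpClient::Channel::ptr_t channel =
        AmqpClient::Channel::Create("localhost");

    // Each message on this queue is a full Request object in transport.
    std::string tag = channel->BasicConsume("orchard.requests");

    for (;;) {
        AmqpClient::Envelope::ptr_t envelope =
            channel->BasicConsumeMessage(tag);
        store_request(envelope->Message()->Body());
    }
}
```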
Database
The database receives creation, update, or deletion requests from the consumer only. It also provides the table used to report on request state to the Orchard API.
Component Distribution
- There can be multiple instances of bonebox.
- There can be multiple instances of orchard.
- There can be multiple services being controlled.
- There cannot be multiple instances of the MQ for a given orchard and harvester.
- There cannot be multiple instances of harvester for a given MQ, orchard, and DB.
- There cannot be multiple instances of the DB for a given orchard, harvester, and MQ.
Access Control
Authentication is important to prevent the entire system from being hijacked using information obtained through adversary interception.
Orchard Users
- An orchard super user that can create or destroy bonebox users and service users by bonebox via orchard.
- A bonebox user that is able to register and deregister bonebox GUIDs associated with that user, and to associate registered service GUIDs with a bonebox GUID via orchard. Each of these bonebox GUIDs represents an instance of bonebox. A bonebox user cannot read or change any item associated with another bonebox user. The bonebox user is used to authenticate calls made to orchard. These users can only be created by the super user.
- A generic service user that services use to authenticate when registering as an available service and updating request statuses. These users can only be created by a bonebox user.
Services
Services do use authentication, but the service user can be shared with third parties; this is by design. It allows first-party orchestration in third-party environments. It also allows lockouts when a service account is compromised, as well as layered access control in mixed-vendor environments.
Services are able to register themselves as available as a controllable service via obtaining a GUID from orchard. Once a service is registered, its service GUID must be associated with a bonebox GUID by the bonebox instance before requests for it can be created.
Once associated, a service should poll the orchard API periodically for requests associated with it in REG status. Once one is found, it should acknowledge the request by setting it to ACK status, and can provide a note in its response that will ultimately be available in that status payload to bonebox.
During the processing of the request, the service can poll for cancellations to that request via the orchard API and halt accordingly.
Once the request either completes or fails, the service updates the request status to a terminal status and moves on.
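Put together, the service side of that exchange might look like the sketch below. The http_get/http_post helpers are stubs standing in for whatever HTTP client the service links against, and every endpoint path is an assumption.

```cpp
// Sketch of a service's lifecycle against orchard: self-register, poll
// for REG requests, acknowledge, work, then set a terminal state. The
// endpoints and helpers are hypothetical stand-ins.
#include <chrono>
#include <string>
#include <thread>

// Stubs for illustration; a real service would wrap libcurl or similar.
std::string http_get(const std::string& url) { return ""; }
void http_post(const std::string& url, const std::string& body) {}

int main()
{
    // Self-register to obtain a service GUID. A bonebox instance must
    // then associate it before any requests can target this service.
    std::string service_guid = http_get("https://orchard/service/register");

    for (;;) {
        // Poll for a request in REG status addressed to this service.
        std::string request = http_get(
            "https://orchard/requests?service=" + service_guid + "&state=REG");

        if (!request.empty()) {
            // Acknowledge, attaching a note for bonebox's status payload.
            http_post("https://orchard/request/update",
                      R"({"state": "ACK", "payload": "starting work"})");

            // ... do the work, polling periodically for a CCL update ...

            // Terminal state: CPT on success, ERR on failure.
            http_post("https://orchard/request/update",
                      R"({"state": "CPT", "payload": "done"})");
        }
        std::this_thread::sleep_for(std::chrono::seconds(5));
    }
}
```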
Boneboxen
A bonebox instance associates a registered service with itself or deletes registered services. It is then able to issue requests for that service to complete.
It is not able to issue requests for a service unless the service has been associated with that bonebox.
It is able to create/destroy bonebox GUIDs, bonebox users, and service/bonebox associations, as well as requests. It can also update any request not already in a terminal status to the terminal CCL status.
It is able to view all requests associated with a service guid.
It is able to view all services associated with a bonebox guid.
It is able to create/read/update/destroy service users.
It is able to create/read/update/destroy a bonebox user via the super user credentials.
Requests
States
There are five states for a request:
- (REG) Registered
- (ACK) Acknowledged
- (CPT) Completed
- (ERR) Failed
- (CCL) Cancelled
Lifecycle
These states also represent the request lifecycle:
REG -> [ ACK ] -> ( CPT | ERR | CCL )
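The lifecycle is small enough to encode directly. As a sketch, here is the transition check orchard would need to enforce, consistent with the per-state update rules later in this spec:

```cpp
// Request lifecycle as a transition check: REG -> ACK -> (CPT | ERR),
// with CCL reachable from any non-terminal state. Sketch only.
#include <string>

bool is_valid_transition(const std::string& from, const std::string& to)
{
    if (from == "CPT" || from == "ERR" || from == "CCL")
        return false;                       // terminal states are final
    if (to == "CCL")
        return true;                        // bonebox can always cancel
    if (from == "REG")
        return to == "ACK";                 // service acknowledges
    if (from == "ACK")
        return to == "CPT" || to == "ERR";  // service finishes or fails
    return false;
}
```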
Data Fields
A request is ultimately an object in transport and a table row at rest. A rigid object structure is used uniformly across all components.
A JSON representation of the fields a Request object must have is below. Some items are state-specific, meaning that they are only present when updating to their associated state or when reporting to bonebox. All items are displayed when reporting to bonebox.
{ "bonebox guid": "9001-9001-9001-9001", "service guid": "9002-9002-9002-9002", "request guid": "9003-9003-9003-9003", "state": "REG|ACK|CPT|ERR|CCL", "payloads": { 'REG': { 'timestamp': '@timestamp', 'payload': 'dump_to_disk_mode 1' }, 'ACK': { 'timestamp': '@timestamp', 'payload': 'preparing to dump to disk' }, 'CPT': { 'timestamp': '@timestamp', 'payload': 'now logging to disk' }, 'ERR': { 'timestamp': '@timestamp', 'payload': 'failed to transition mode: I/O error' }, 'CCL': { 'timestamp': '@timestamp', 'payload': 'no longer needed, cancel if possible' } } }
For instance, request guid is not present in a “REG” request body from bonebox, because it does not exist until it is created by Orchard. Neither would the ACK, CPT, ERR, or CCL payload entries be present. These entries would, however, all be present when bonebox fetches states for requests: the whole object is returned.
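Since the structure is rigid across components, it maps naturally onto a shared type. A sketch of how the C++ components might model it follows; the field names mirror the JSON above, and the types are assumptions.

```cpp
// Sketch of the Request object shared by all components. Field names
// mirror the JSON representation above; types are assumptions.
#include <map>
#include <optional>
#include <string>

struct StatePayload {
    std::string timestamp;
    std::string payload;
};

struct Request {
    std::string bonebox_guid;
    std::string service_guid;
    // Absent in a REG body from bonebox; orchard creates it.
    std::optional<std::string> request_guid;
    std::string state;  // "REG", "ACK", "CPT", "ERR", or "CCL"
    // Keyed by state; only states the request has reached are present.
    std::map<std::string, StatePayload> payloads;
};
```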
Actor/Request Interoperation
The Orchard API is the arbiter of how components are allowed to behave with each other and with the state of requests. The logic around access control will be based on the state the request is in, the source of the action being taken, and the target of the action.
The format is as follows: the top-level hierarchy represents a CRUD structure (Create, Read, Update, Delete). Each top-level hierarchy is divided into “All” or “Single” to categorize actions against all request targets in a CRUD category or against a single one. A sketch of how orchard might encode one of these rules follows the matrix.
Entries are of the format:

<actor> by <field> [and <field>] [via <actor mechanism>] for <request criteria> [ — <scope clarification> ]
Registered (REG)
- Can Create:
  - Single:
    - bonebox by bonebox guid and service guid via orchard for a single request associated with a single service
  - All:
    - None. Requests are registered one at a time, and are one per service.
- Can Read:
  - Single:
    - service by request guid via orchard
    - bonebox by bonebox guid and request guid via orchard for a single request associated with a service associated with that bonebox
  - All:
    - bonebox by bonebox guid [and service guid] via orchard for all requests associated with that [service and] bonebox
- Can Update:
  - Single:
    - bonebox by request guid and bonebox guid via orchard for a request associated with that bonebox — can only update to CCL
    - service by request guid and service guid via orchard for a request associated with that service — can only update to ACK
  - All:
    - bonebox by bonebox guid [and service guid] via orchard for all requests [to a single service] associated with that bonebox — can only update to CCL
- Can Delete:
  - Single:
    - None. This is not a terminal state.
  - All:
    - None. This is not a terminal state.
Acknowledged (ACK)
- Can Create:
  - Single:
    - None. This is not an entry state.
  - All:
    - None. This is not an entry state.
- Can Read:
  - Single:
    - bonebox by request guid via orchard for a request associated with that bonebox
  - All:
    - bonebox by [service guid and] bonebox guid via orchard for all requests [to a service] associated with that bonebox
- Can Update:
  - Single:
    - service by request guid and service guid via orchard for a request associated with that service — can only update to CPT or ERR.
    - bonebox by request guid and bonebox guid via orchard for a request associated with that bonebox — can only update to CCL.
  - All:
    - None. Requests are processed one at a time.
- Can Delete:
  - Single:
    - None. Only requests in a terminal state can be deleted.
  - All:
    - None. Requests are processed one at a time and can only be deleted in a terminal state.
Completed (CPT)
- Can Create:
  - Single:
    - None. This is not an entry state.
  - All:
    - None. This is not an entry state.
- Can Read:
  - Single:
    - bonebox by request guid and bonebox guid via orchard for a request associated with that bonebox
  - All:
    - bonebox by [service guid and] bonebox guid via orchard for requests associated with that [service and] bonebox.
- Can Update:
  - Single:
    - None. A completed request can only be deleted.
  - All:
    - None. A completed request can only be deleted.
- Can Delete:
  - Single:
    - bonebox by request guid and bonebox guid via orchard for requests associated with that bonebox.
  - All:
    - bonebox by [service guid and] bonebox guid via orchard for requests associated with that [service and] bonebox
Cancelled (CCL)
- Can Create:
  - Single:
    - None. This is not an entry state.
  - All:
    - None. This is not an entry state.
- Can Read:
  - Single:
    - bonebox by request guid and bonebox guid via orchard for requests associated with that bonebox
  - All:
    - bonebox by [service guid and] bonebox guid via orchard for requests associated with that [service and] bonebox
- Can Update:
  - Single:
    - None. This is a terminal state.
  - All:
    - None. This is a terminal state.
- Can Delete:
  - Single:
    - bonebox by request guid and bonebox guid via orchard for requests associated with that bonebox
  - All:
    - bonebox by [service guid and] bonebox guid via orchard for requests associated with that [service and] bonebox
Failed (ERR)
- Can Create:
  - Single:
    - None. This is not an entry state.
  - All:
    - None. This is not an entry state.
- Can Read:
  - Single:
    - bonebox by request guid and bonebox guid via orchard for requests associated with that bonebox
  - All:
    - bonebox by bonebox guid [and service guid] via orchard for requests associated with that [service and] bonebox
- Can Update:
  - Single:
    - None. This is a terminal state.
  - All:
    - None. This is a terminal state.
- Can Delete:
  - Single:
    - bonebox by request guid and bonebox guid via orchard for requests associated with that bonebox
  - All:
    - bonebox by bonebox guid [and service guid] via orchard for requests associated with that [service and] bonebox
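As promised above, here is a sketch of how orchard might encode one row of this matrix in code: the rule that a service may only move its own requests from REG to ACK, or from ACK to CPT/ERR. The types and field names are assumptions, not the orchard schema.

```cpp
// Sketch of enforcing one row of the matrix: a service may update only
// its own requests, and only REG -> ACK or ACK -> (CPT | ERR).
#include <string>

struct Action {
    std::string actor;         // "bonebox" or "service"
    std::string actor_guid;    // the caller's guid
    std::string target_state;  // the state the caller wants to set
};

struct RequestRow {
    std::string bonebox_guid;
    std::string service_guid;
    std::string state;         // current request state
};

bool service_may_update(const Action& a, const RequestRow& r)
{
    if (a.actor != "service" || a.actor_guid != r.service_guid)
        return false;  // wrong actor, or a request for another service
    if (r.state == "REG")
        return a.target_state == "ACK";
    if (r.state == "ACK")
        return a.target_state == "CPT" || a.target_state == "ERR";
    return false;      // terminal states cannot be updated
}
```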
Bad News and Worse News
I noticed none of the runners were feeding tonight.
I checked, and apparently all of them were klined shortly after turning the feeds back on.
They’re still finding the runners. I’ll need to bring in new tools to obfuscate the other points in their field of vision. The rest is more work than it sounds but it’s worth it in the long run.
First is T-ORCH, then DEIGE, both of which will need to be open source for this to really work. Until then it’s not recommended to run a runner unless you like being klined. If you’re willing to risk it, I’ve left synapse running. I’ll also be creating an “easy deploy” script that will deploy tenta to a remote host for you; still working out how to do that securely. I’ll also need to integrate Leptin into Synapse, and fix a bug in Nerve that kills its data file; it’s happened twice now, and a decent amount of logs were lost each time.
Once T-ORCH is ready I’ll need to go back and rewrite most of the IRCTHULU services to actually use it. This should help me harden the payload model for it, since that isn’t currently spec’d yet. After all that’s done, I’ll want Tenta to be able to plug into and use DEIGE. I’ll also want Synapse to move from Synapse mode to Leptin mode and back via T-ORCH, and I’ll want Tenta to handle various commands from there.
Oh, the worse news. That typo that lost 80,000 records — it was more like 200,000, as there were two negative events during the restructure.
On the bright side, since the new features automate the last few pieces, they’ll eventually wear themselves out without me developing new software. They’ve got some smart people, but I’m relatively confident that the only way they’re still finding the runners is that they’re klining random users on hunches, and I’ve already seen some evidence of that.
Recap
- Spec and design the new big-boy toy, T-ORCH.
- Implement T-ORCH.
- Integrate all services into T-ORCH (including Tenta layer).
- Implement DEIGE. Will probably involve a couple of microservices. Am considering reimplementing HOWDI as a microservice component of DEIGE.
- Integrate Leptin into Synapse.
- Fix data source processing bug in Nerve.
- Integrate Tenta into DEIGE, possibly via T-ORCH for automated identity cycling.
-C
Data Streams are Glorious – IRCTHULU is Back Online
So, around the new year I disabled the feeds during an operation to shut off staff eyeballs for one of the networks that’s been targeting the users running the IRCTHULU runners.
Only.
I didn’t. I built a new tool called Leptin, which, like synapse, pulls from the MQ, only it dumps to disk instead of the database. Leptin will eventually be integrated into Synapse.
This was to give the runners a break while I focused on some work stuff without having to worry about someone finding something in the logs to identify the runners again.
Unfortunately, during the development of Leptin I had to drop about 80,000 messages over a pretty stupid typo, so we lost a couple days of logs. I’ve got some safeguards in the code now that will prevent that from even being possible in the future.
As for Leptin, what’s especially cool about the design for this part is that you can use the existing tool, nerve, to replay it back into the queue, and it’ll slurp it up.
Leptin itself is a bottleneck right now, though. I should have written it to be asynchronous like synapse, as Pika in Python is very, very slow.
Pattern Diagram for T-ORCH
I’ve finally decided on a usable pattern for general service orchestration in SOA.
I’ll be using this for the T-ORCH set of updates mentioned previously, and probably in every other solution I ever create when I have a choice, to be honest.
Here’s a fabulous diagram made by a fabulous person:
You’ve got a controller, an API, and the service you want to control. Behind the API is an MQ, a consumer service and a database.
Here’s how it works:
Controller
The controller registers a request or cancels a request. It can also check on the state of a request.
API
The API does all kinds of stuff: it reports on state for the controller; it receives request registrations and request cancellations from the controller, which it relays to the MQ; it polls the state of a request from the DB; and it sends new requests to the service being controlled, as well as sending request state updates from the service to the MQ.
SERVICE
The service, besides performing its normal function in the environment, gets new requests once they’ve been registered, acknowledges requests, and marks requests as complete.
MQ
The MQ receives request registrations and request updates (state changes), including request cancellations, from the API.
CONSUMER
The consumer relays these from the MQ and inserts them into the database. This greatly simplifies the interactions in the whole design.
DB
The database receives creation, update, or cancellation requests from the consumer only. It also provides the table used to report on request state to the API.
Request Lifecycle
The request lifecycle is: registration, acknowledgment, and completion.
Open for Adspace
Would you like to sell adspace in the presenta example UI?
You got it.
Send me an email at punches.chris@gmail.com
*All revenue obtained will fund the surro linux project.
Identity Research Dataset Boost
I’ve got early drafts in for a new tool that will collect larger identity research data.
It basically connects to a server and scans through everything and collects the user profiles before disconnecting. This should give us about a 60,000 user profile benefit.
Of course, I can’t run it. I can only develop it and wait for someone to use it.
Sneak Peek at new orch features.
Data Feeds Need Control and Reporting Signal Channels in SOA
I’ll expand on this later after I’ve slept, but this is the high-level design for the next update for tenta (besides some obvious gotchas pertaining to field-of-vision obfuscation).
This will allow me to control and report on tenta clients so I know what feeds are going where, and even control feed client state. It’ll also provide some great dashboarding capability.
Update 1:
During the buildout of a new component called “Leptin”, due to being tired and making the perfect typo, I had to drop about 8,000 messages in transport while I rewired the changes in place. And that’s why you use a pre-prod, and that’s what I get for trying to cut costs with cowboy maneuvers. More details to follow.
New Standalone: Distributed Endpoint Identity Generation Engine
Introducing DEIGE
I’ve got most pieces of this already built, which I’ve been using for testing, but automated identity creation on IRC networks is super easy, even for highly restrictive environments.
They’re not going to change the IRC protocol, so the constraints of IRC protocol are givens.
Nickserv varies from network to network and depends on which services bots they use and how they’re configured. So some components will need to be network-dependent.
Given input:
- host (meta, registered vs used)
- user
- ident
- password
This should be able to operate as a single command that connects and does everything.
There are some problems to solve there:
N, R and E are separate hosts.
In addition to that, R and E need to be random and disparate between iterations for a system like this to really work.
The two problems introduced there are:
- orchestration
- endpoint creation
My two major pain points in all things.
Endpoint creation for R is relatively easy. Endpoint creation for E is slightly more complicated, as you need to be able to open ports on a host to do that, which requires root access. For dynamic endpoint creation you’d almost need to generate the OS image and spawn it dynamically.
I have an idea that I want to test.