Middleware

I’ve decided to integrate some of the concepts from the Orchard project into IRCTHULU as new middleware rather than waiting for all of this to come together.  So, in short, “this is not IRCTHULU’s final form”.

But it’s a good one.

Introduces

  • Identity management and authorization microservices.
  • An input API to secure the message bus without complicating access provisioning.
  • A way for unvetted users to use Tenta.
  • A control channel in the message bus.

IRCTHULU: Neurogenesis In Progress

A few updates.

Ircthulu lives — In The Shadows

Despite popular belief, the logs are still being collected.  This is a unique project because it’s not just the development of new software, or the hosting of its components; it also has public relations operations bundled up with its maintenance, and that makes it very complex.  It’s been exciting, interesting, and educational for me.

Currently in flat file storage mode, will stream to db when ready

The adaptable architecture has allowed me to dump the incoming feeds to disk for extended periods of time, which prevents operators at the various networks being logged from establishing feedback loops.  When I’m confident all issues are addressed, I’ll slurp the flat-file storage currently in use into the database, and that will create a much larger dataset for my instance of Presenta.

Looking at a more robust backend

While I implement Orchard, an orchestration system that will be something of a game changer both for IRCTHULU’s ability to integrate with external products and for the enterprise community in general, I’m considering moving the backend components of IRCTHULU over to IBM Integration Bus, simply because of how versatile it is; it will depend on how my evaluations of it go.  From what I’ve come to understand about IIB’s capabilities so far, it would create a robust intelligence-pipeline framework, but it would reduce the deployability of the overall project.  I’m still looking at it.  The decision will likely come down to how much porting to Java I’d need to do, as I generally try to stay out of the Java space when avoidable, since Java’s overwhelming market share leaves gaps in other runtimes.  IIB v10 has a free developer license, so I should be able to use it for a personal project.  If that license doesn’t cover my use and I end up going with IIB, I may have to move this project over to ad revenue to offset licensing costs.

DEIGE may require Orchard as a dependency

As for DEIGE, it’s still being researched.  Identity generation operations on these networks have a lot of exposable properties.  It may need to wait for Orchard’s completion to be able to handle the distributed-endpoint nature of this project’s requirements.

Note: I’m not a graphic designer

I’m struggling a bit with my terrible WordPress themes across this project’s whole parent brand.  There might be some third-party changes in that area soon, depending on what I find.

T-ORCH: Orchard

I’ve decided to evaluate Crow and Pistache for the REST API in T-ORCH called Orchard.

Django REST Framework seemed to provide the necessary features, but I felt the user management needed to be out of band, which makes most of those features unnecessarily complex for what needs to be done.  I have also long expressed a community need for C/C++ interfacing with AMQP/RabbitMQ, to prevent Java markets from dominating this area of architecture.  In addition, I found the Pika interface for AMQP in my last DRF project to be so slow when interfacing with an MQ that it was a detriment to any project using it.

I may break Orchard up into microservices, though: one dedicated microservice for user management and another for what the users actually do in Orchard.  I haven’t decided yet.  It will be determined entirely by how well I can abstract the API library interface of whatever I end up going with.  Crow so far seems to be exactly what I need; Pistache is promising but somewhat less integrable.

T-ORCH SPEC

Abstract

T-ORCH allows you to centrally control, administrate, and orchestrate services and microservices across environments or within the same environment.  It is intended for complex application-layer orchestration: a command and control system triggering actions within the services that are aware of it.

The diagram below outlines component interoperation at an abstract level:

Components

  • The controller, which will be called bonebox.
  • The API, which will be called orchard.
  • The MQ, which will always be just MQ.
  • The consumer, which will be called harvester.
  • The database, which will always be just DB.
  • The service, which is what this whole thing revolves around.  It isn’t really part of the solution, but it needs to be aware of how it should interact, presumably through shared library calls.  For simplicity we’ll just call this the service, but it’s technically a client of orchard.

Bonebox Controller

The bonebox controller registers or cancels requests, and it can check on the state of requests.  It is the point at which bonebox GUIDs are registered.  It is able to obtain all available service GUIDs and request GUIDs.

Orchard API

The Orchard RESTful API reports on request state; it creates, updates, and deletes requests on behalf of the controller, relaying them to the MQ for buffering.  It polls request state directly from the DB, provides a way for services to check for new requests, and sends request state updates from the service to the MQ.  It is the point at which request GUIDs and bonebox controller GUIDs are created, and the point at which GUIDs are paired to each other.  It also authenticates users.

Service

The service, besides performing its normal function in the environment, gets new requests once they’ve been registered, acknowledges them, and marks them as complete or failed.  The service is associated by Bonebox but self-registered via Orchard, creating a service GUID consumable by Bonebox for request associations.

MQ

The MQ receives all request creations and updates (state changes), including request cancellations and deletions, from the Orchard API.

Harvester Consumer

The harvester consumer relays requests from the MQ and inserts them into the database.
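The harvester’s relay step is simple enough to sketch.  Below is a minimal, hypothetical Python consumer assuming RabbitMQ via the pika library and a SQLite table; the queue name `orchard.requests` and the column layout are my own placeholders, not part of the spec:

```python
import json
import sqlite3


def row_from_message(body: bytes):
    """Flatten a Request message into a DB row.
    Field names follow the Request object defined later in the spec."""
    req = json.loads(body)
    return (
        req["request guid"],
        req["bonebox guid"],
        req["service guid"],
        req["state"],
        json.dumps(req.get("payloads", {})),
    )


def main():
    # pika (third-party AMQP client) is imported lazily so the parsing
    # helper above is usable without a broker present.
    import pika

    db = sqlite3.connect("orchard.db")
    db.execute(
        "CREATE TABLE IF NOT EXISTS requests "
        "(request_guid TEXT PRIMARY KEY, bonebox_guid TEXT,"
        " service_guid TEXT, state TEXT, payloads TEXT)"
    )

    def on_message(ch, method, properties, body):
        # Upsert: a later state change replaces the earlier row.
        db.execute(
            "INSERT OR REPLACE INTO requests VALUES (?, ?, ?, ?, ?)",
            row_from_message(body),
        )
        db.commit()
        ch.basic_ack(delivery_tag=method.delivery_tag)

    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = conn.channel()
    channel.queue_declare(queue="orchard.requests", durable=True)
    channel.basic_consume(queue="orchard.requests", on_message_callback=on_message)
    channel.start_consuming()


if __name__ == "__main__":
    main()
```

Keeping the flattening logic in its own function keeps the broker-facing code thin, which matters given the Pika performance concerns noted elsewhere on this blog.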

Database

The database receives creation, update, or deletion requests from the consumer only.  It also provides the table used by the Orchard API to report on request state.

Component Distribution

  • There can be multiple instances of bonebox.
  • There can be multiple instances of orchard.
  • There can be multiple services being controlled.
  • There cannot be multiple instances of MQ per orchard and harvester.
  • There cannot be multiple instances of harvester per MQ, orchard, and DB.
  • There cannot be multiple instances of DB per orchard, harvester, and MQ.

Access Control

Authentication is important to prevent the entire system from being hijacked by information obtained in adversary interception.

Orchard Users

  • An orchard super user that can create or destroy bonebox users and service users by bonebox via orchard.
  • A bonebox user that is able to register and deregister bonebox GUIDs associated with that user and to associate registered service GUIDs with a bonebox GUID via orchard.  Each of these bonebox GUIDs represents an instance of bonebox.  A bonebox user cannot read or change any item associated with another bonebox user.  The bonebox user is used to authenticate calls made to orchard.  These users can only be created by the super user.
  • A generic service user that services use to authenticate when registering as an available service and updating request statuses.  These users can only be created by a bonebox user.

Services

Services do use authentication, but this user can be shared with third parties, and this is by design.  It allows first-party orchestration in third-party environments.  It also allows lockouts when a service account is compromised, as well as layered access control in mixed-vendor environments.

Services are able to register themselves as an available, controllable service by obtaining a GUID from orchard.  Once a service is registered, its service GUID must be associated with a bonebox GUID by the bonebox instance before requests for it can be created.

Once associated, a service should periodically poll the orchard API for requests associated with it in REG status.  Once one is found, it should acknowledge the request by setting it to ACK status, and it can provide a note in its response that will ultimately be available to bonebox in that status payload.

During the processing of the request, the service can poll for cancellations to that request via the orchard API and halt accordingly.

Once the request either completes or fails, the service updates the request status to a terminal status and moves on.
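The service-side flow described above reduces to a small polling loop.  The sketch below is my own illustration, with the orchard API calls abstracted as injected callables (none of these names come from the spec):

```python
def poll_once(fetch, acknowledge, process, complete, fail):
    """One iteration of a service's polling loop.

    fetch()            -> requests in REG status for this service
    acknowledge(r, n)  -> update request r to ACK with note n
    process(r)         -> do the actual work for request r
    complete(r, n)     -> update r to the terminal CPT status
    fail(r, n)         -> update r to the terminal ERR status

    All five are stand-ins for orchard API calls; a real service
    would also poll for CCL cancellations while processing.
    """
    for req in fetch():
        acknowledge(req, "request acknowledged")
        try:
            process(req)
        except Exception as exc:
            fail(req, str(exc))
        else:
            complete(req, "done")
```

The injected-callable shape makes the lifecycle logic testable without a running orchard instance.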

Boneboxen

A bonebox instance associates a registered service with itself or deletes registered services.   It is then able to issue requests for that service to complete.

It is not able to issue requests for a service unless the service has been associated with that bonebox.

It is able to create/destroy bonebox GUIDs, bonebox users, and service/bonebox associations, as well as requests.  It can also update any request not already in a terminal status to the terminal CCL status.

It is able to view all requests associated with a service guid.

It is able to view all services associated with a bonebox guid.

It is able to create/read/update/destroy service users.

It is able to create/read/update/destroy a bonebox user via the super user credentials.

Requests

States

There are five states for a request:

  • (REG) Registered
  • (ACK) Acknowledged
  • (CPT) Completed
  • (ERR) Failed
  • (CCL) Cancelled

Lifecycle

These states also represent the request lifecycle:

REG -> [ ACK ] -> ( CPT | ERR | CCL )
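Read together with the access-control tables later in this spec, the lifecycle implies a small transition table.  A sketch in Python (the allowed-transition sets are my inference from the ACL section, not normative):

```python
# REG -> ACK -> (CPT | ERR | CCL), with cancellation also
# possible straight from REG.  CPT, ERR, and CCL are terminal.
TRANSITIONS = {
    "REG": {"ACK", "CCL"},
    "ACK": {"CPT", "ERR", "CCL"},
    "CPT": set(),  # terminal
    "ERR": set(),  # terminal
    "CCL": set(),  # terminal
}


def can_transition(current: str, new: str) -> bool:
    """Return True if a request may move from `current` to `new`."""
    return new in TRANSITIONS.get(current, set())
```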

Data Fields

A request is ultimately an object in transport and a table row at rest.  A rigid object structure is used uniformly across all components.

A JSON representation of the fields a Request object must have is below.  The per-state entries under “payloads” are state-specific, meaning that each is only present when updating to its associated state or when reporting to bonebox.  All items are displayed when reporting to bonebox.

{
  "bonebox guid": "9001-9001-9001-9001",
  "service guid": "9002-9002-9002-9002",
  "request guid": "9003-9003-9003-9003",
  "state": "REG|ACK|CPT|ERR|CCL",
  "payloads": {
    "REG": { "timestamp": "@timestamp", "payload": "dump_to_disk_mode 1" },
    "ACK": { "timestamp": "@timestamp", "payload": "preparing to dump to disk" },
    "CPT": { "timestamp": "@timestamp", "payload": "now logging to disk" },
    "ERR": { "timestamp": "@timestamp", "payload": "failed to transition mode: I/O error" },
    "CCL": { "timestamp": "@timestamp", "payload": "no longer needed, cancel if possible" }
  }
}

For instance, “request guid” is not present in a “REG” request body from bonebox, because it does not exist until Orchard creates it.  Neither would the ACK, CPT, ERR, or CCL payload entries be present.  All of these entries would, however, be present when bonebox fetches states for requests: the whole object is returned.
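As an illustration, a hypothetical helper that builds the initial REG body from bonebox might look like this.  The field names follow the JSON above; the function name and the ISO-8601 timestamp format are my own assumptions:

```python
import json
from datetime import datetime, timezone


def reg_request_body(bonebox_guid: str, service_guid: str, payload: str) -> str:
    """Build the JSON body bonebox submits to orchard to register a request.

    Per the spec, "request guid" is absent (orchard mints it), and only
    the REG payload entry exists at this point in the lifecycle.
    """
    body = {
        "bonebox guid": bonebox_guid,
        "service guid": service_guid,
        "state": "REG",
        "payloads": {
            "REG": {
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "payload": payload,
            }
        },
    }
    return json.dumps(body)
```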

Actor/Request Interoperation

The Orchard API is the arbiter of how components are allowed to behave with each other and with the state of requests.  The logic around access control will be based on the state the request is in, the source of the action being taken, and the target of the action.

The format is as follows:  the top-level hierarchy represents a CRUD structure (Create, Read, Update, Delete).  Each top-level entry is divided into “Single” or “All” to categorize actions against a single request target or all request targets in a CRUD category.

Entries are of the format:

<actor> by [(<field> [and])] [via <actor mechanism>] for <request criteria> [ — <scope clarification> ]

Registered (REG)

  • Can Create:
    • Single:
      • bonebox by bonebox guid and service guid via orchard for a single request associated with a single service
    • All:
      • None.  Requests are registered one at a time, and are one per service.
  • Can Read:
    • Single:
      • service by request guid via orchard
      • bonebox by bonebox guid and request guid via orchard for a single request associated with a service associated with that bonebox
    • All:
      • bonebox by bonebox guid [and service guid] via orchard for all requests associated with that [service and] bonebox
  • Can Update:
    • Single:
      • bonebox by request guid and bonebox guid via orchard for a request associated with that bonebox — can only update to CCL
      • service by request guid and service guid via orchard for a request associated with that service — can only update to ACK
    • All:
      • bonebox by bonebox guid [and service guid] via orchard for all requests [to a single service] associated with that bonebox — can only update to CCL
  • Can Delete:
    • Single:
      • None.  This is not a terminal state.
    • All:
      • None.  This is not a terminal state.

Acknowledged (ACK)

  • Can Create:
    • Single:
      • None.  This is not an entry state.
    • All:
      • None.  This is not an entry state.
  • Can Read:
    • Single:
      • bonebox by request guid via orchard for a request associated with that bonebox
    • All:
      • bonebox by [service guid and] bonebox guid via orchard for all requests [to a service] associated with that bonebox
  • Can Update:
    • Single:
      • service by request guid and service guid via orchard for a request associated with that service — can only update to CPT or ERR.
      • bonebox by request guid and bonebox guid via orchard for a request associated with that bonebox — can only update to CCL.
    • All:
      • None.  Requests are processed one at a time.
  • Can Delete:
    • Single:
      • None.  Only requests in a terminal state can be deleted.
    • All:
      • None.  Requests are processed one at a time and can only be deleted in a terminal state.

Completed (CPT)

  • Can Create:
    • Single:
      • None.  This is not an entry state.
    • All:
      • None.  This is not an entry state.
  • Can Read:
    • Single:
      • bonebox by request guid and bonebox guid via orchard for a request associated with that bonebox
    • All:
      • bonebox by [service guid and] bonebox guid via orchard for requests associated with that [service and] bonebox.
  • Can Update:
    • Single:
      • None.  A completed request can only be deleted.
    • All:
      • None.  A completed request can only be deleted.
  • Can Delete:
    • Single:
      • bonebox by request guid and bonebox guid via orchard for requests associated with that bonebox.
    • All:
      • bonebox by [service guid and] bonebox guid via orchard for requests associated with that [service and] bonebox

Cancelled (CCL)

  • Can Create:
    • Single:
      • None.  This is not an entry state.
    • All:
      • None.  This is not an entry state.
  • Can Read:
    • Single:
      • bonebox by request guid and bonebox guid via orchard for requests associated with that bonebox
    • All:
      • bonebox by [service guid and] bonebox guid via orchard for requests associated with that [service and] bonebox
  • Can Update:
    • Single:
      • None.  This is a terminal state.
    • All:
      • None.  This is a terminal state.
  • Can Delete:
    • Single:
      • bonebox by request guid and bonebox guid via orchard for requests associated with that bonebox
    • All:
      • bonebox by [service guid and] bonebox guid via orchard for requests associated with that [service and] bonebox

Failed (ERR)

  • Can Create:
    • Single:
      • None.  This is not an entry state.
    • All:
      • None.  This is not an entry state.
  • Can Read:
    • Single:
      • bonebox by request guid and bonebox guid via orchard for requests associated with that bonebox
    • All:
      • bonebox by bonebox guid [and service guid] via orchard for requests associated with that [service and] bonebox
  • Can Update:
    • Single:
      • None.  This is a terminal state.
    • All:
      • None.  This is a terminal state.
  • Can Delete:
    • Single:
      • bonebox by request guid and bonebox guid via orchard for requests associated with that bonebox
    • All:
      • bonebox by bonebox guid [and service guid] via orchard for requests associated with that [service and] bonebox
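The single-request update rules in the tables above condense into a small lookup.  This is my own condensation, not normative text from the spec:

```python
# Who may move a single request out of each state, per the
# Actor/Request Interoperation tables.  Terminal states (CPT, ERR,
# CCL) permit deletion by bonebox but no further updates.
ALLOWED_UPDATES = {
    ("REG", "bonebox"): {"CCL"},
    ("REG", "service"): {"ACK"},
    ("ACK", "bonebox"): {"CCL"},
    ("ACK", "service"): {"CPT", "ERR"},
}


def may_update(state: str, actor: str, target: str) -> bool:
    """Return True if `actor` may update a request in `state` to `target`."""
    return target in ALLOWED_UPDATES.get((state, actor), set())
```

An Orchard implementation could evaluate this table before touching request state, keeping the ACL logic in one place.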

Bad News and Worse News

I noticed none of the runners were feeding tonight.

I checked, and apparently all of them were klined shortly after turning the feeds back on.

They’re still finding the runners.  I’ll need to bring in new tools to obfuscate the other points in their field of vision.  The rest is more work than it sounds, but it’s worth it in the long run.

First comes T-ORCH, then DEIGE, both of which will need to be open source for this to really work.  Until then, it’s not recommended to run a runner unless you like being klined.  If you’re willing to risk it, I’ve left Synapse running.  I’ll also be creating an “easy deploy” script that will deploy Tenta to a remote host for you; I’m still working out how to do that securely.  I’ll also need to integrate Leptin into Synapse and fix a bug in Nerve that kills its data file; it’s happened twice now, and a decent amount of logs were lost each time.

Once T-ORCH is ready I’ll need to go back and rewrite most of the IRCTHULU services to actually use it.  This should help me harden the payload model for it since that’s not currently spec’d yet.  After all that’s done I’ll want Tenta to be able to plug into and use DEIGE.  I’ll also want Synapse to move from Synapse mode to Leptin mode via T-ORCH and vice versa and I’ll want Tenta to handle various commands from there.

Oh, the worse news.  That typo that lost 80,000 records — it was more like 200,000, as there were two negative events during the restructure.

On the bright side, since the new features automate the last few pieces, they’ll eventually wear out without developing new software.  They’ve got some smart people but I’m relatively confident that the only way they’re finding the runners still is that they’re k-lining random users on hunches and I’ve already seen some evidence of that.

Recap

  • Spec and design the new big-boy toy, T-ORCH.
  • Implement T-ORCH.
  • Integrate all services into T-ORCH (including Tenta layer).
  • Implement DEIGE.  Will probably involve a couple of microservices.  Am considering reimplementing HOWDI as a microservice component of DEIGE.
  • Integrate Leptin into Synapse
  • Fix data source processing bug in Nerve.
  • Integrate Tenta into DEIGE, possibly via T-ORCH for automated identity cycling.

-C

Data Streams are Glorious – IRCTHULU is Back Online

So, around the new year I disabled the feeds during an operation to shut off staff eyeballs for one of the networks that’s been targeting the users running the IRCTHULU runners.

Only.

I didn’t.  I built a new tool called Leptin, which, like Synapse, pulls from the MQ, only it dumps to disk instead of the database.  Leptin will eventually be integrated into Synapse.

This was to give the runners a break while I focused on some work stuff without having to worry about someone finding something in the logs to identify the runners again.

Unfortunately, during the development of Leptin I had to drop about 80,000 messages over a pretty stupid typo, so we lost a couple days of logs.  I’ve got some safeguards in the code now that will prevent that from even being possible in the future.

As for Leptin, what’s especially cool about the design for this part is that you can use the existing tool, Nerve, to replay the dump back into the queue, and it’ll slurp it up.
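A replay along these lines could be as simple as the sketch below.  This is not Nerve’s actual code; it assumes the flat file is line-delimited JSON and uses the third-party pika library with a placeholder queue name:

```python
import json
from pathlib import Path


def iter_messages(path):
    """Yield raw message bodies from a line-delimited JSON flat file,
    skipping lines that do not parse (e.g. a truncated final line)."""
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line:
            continue
        try:
            json.loads(line)
        except ValueError:
            continue
        yield line.encode()


def replay(path, queue="ircthulu.feed"):
    """Publish every valid message in the flat file back onto the MQ.
    The queue name is an illustrative placeholder."""
    import pika  # third-party AMQP client; imported lazily

    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = conn.channel()
    channel.queue_declare(queue=queue, durable=True)
    for body in iter_messages(path):
        channel.basic_publish(exchange="", routing_key=queue, body=body)
    conn.close()
```

Validating each line before publishing means a partially-written dump (the failure mode that killed logs before) replays cleanly instead of poisoning the queue.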

Leptin itself is a bottleneck right now, though.  I should have written it to be asynchronous like Synapse, as Pika in Python is very, very slow.

Pattern Diagram for T-ORCH

I’ve finally decided on a usable pattern for general service orchestration in SOA.

I’ll be using this for the T-ORCH set of updates mentioned previously.  As well as probably in every other solution I ever create when I have a choice, to be honest.

Here’s a fabulous diagram made by a fabulous person:

You’ve got a controller, an API, and the service you want to control.  Behind the API is an MQ, a consumer service and a database.

Here’s how it works:

Controller

The controller registers a request or cancels a request.  It also can check on the state of a request.

API

The API does all kinds of stuff:  it reports on state for the controller; it receives request registrations and cancellations from the controller, which it relays to the MQ; it polls the state of a request from the DB; it sends new requests to the service being controlled; and it sends request state updates from the service to the MQ.

SERVICE

The service, besides performing its normal function in the environment, gets new requests once they’ve been registered, acknowledges them, and marks them as complete.

MQ

The MQ receives request registrations and request updates (state changes), including request cancellations, from the API.

CONSUMER

The consumer relays these from the MQ and inserts them into the database.  This greatly simplifies the interactions in the whole design.

DB

The database receives creation, update, or cancellation requests from the consumer only.  It also provides the table used to report on request state to the API.

Request Lifecycle

The request lifecycle is:  registration, acknowledgment, and completion.

Open for Adspace

Would you like to sell adspace in the presenta example UI?

You got it.

Send me an email at punches.chris@gmail.com

*All revenue obtained will fund the surro linux project.