On Mon, May 6, 2013 at 6:24 AM, James Jones <
data.p...@gmail.com> wrote:
> So a quick read about RedMine, Request Track and a comparison of the
> functionality leads me to believe these would be human interface tools
> and/or mostly for system monitoring and issue resolution/tracking
>
> I'm still very interested in building a roadmap/blueprint for how the
> machines see and resolve problems and how this can be a cumulative library
> of prioritized applied solutions - a hierarchy of alerts that induce certain
> canned logic to be run until either the problem is resolved or an alert is
> passed "up" to human logic - in this type of system it can start out very
> "stupid" (frequent alerts) and "smarter" logic can be added to automate
> issue resolution over time so it becomes progressively more automated
> without having to solve everything at once when the system is designed....
> my 2 pesos...
>
Well, we are on the same page. RT or Redmine are just escalation
levels for humans, for sure. Although, one *could* mix human and
machine intervention, so that if no human can answer within "x" amount
of time, a machine takes over (maybe just stopping production; it all
depends on the scenario). In practical use now, you'd have a mix of
end users: some comfortable with lots of automated intervention, all
the way to some who would want to intervene at every hiccup. I
anticipate that early adoption of CubeSpawn would involve lots of
human intervention, so that is why I imagined these tools would be
immediately useful.
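Just to make the timed hand-off concrete, here is a rough sketch of
what I mean (every name here, handle_alert, human_acknowledged,
stop_production, is hypothetical for illustration, not an RT or
Redmine API):

```python
import time

HUMAN_TIMEOUT_SECONDS = 300  # the "x" amount of time before escalation

def handle_alert(alert, human_acknowledged, stop_production,
                 timeout=HUMAN_TIMEOUT_SECONDS, poll_interval=1.0,
                 clock=time.monotonic, sleep=time.sleep):
    """Wait for a human to take the alert; fall back to the machine on timeout."""
    deadline = clock() + timeout
    while clock() < deadline:
        if human_acknowledged(alert):
            return "human"       # a person took the ticket in time
        sleep(poll_interval)
    stop_production(alert)       # no answer in time: machine takes over
    return "machine"
```

The clock and sleep are injectable so the escalation logic itself can
be unit tested without waiting out real timeouts, which ties into the
testing point below.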
However, to your point, there could be a stack of applications.
Machines could simply respond to sensor levels, log-level errors, etc.
Machine error response is the kind of thing that software testing
approaches would be really great at simulating and refining. This is
something I think I can help with, too. There are already really good
all-purpose software testing tools that could likely be used as-is for
automated testing of:
* Building the software stack (a mix of local and system-level unit
and integration tests)
* Simulating production runs and error response (possibly more local
and system-level unit/integration tests, plus some server/client load
simulation runs)
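For the error-response case, even a stock framework like Python's
unittest gets you a long way. A toy example (the rule table and alert
names are made up, not CubeSpawn code) of testing the "canned logic
until escalation" idea:

```python
import unittest

# Hypothetical canned-response table: alert -> automated action.
CANNED_RULES = {
    "temp_high":  "throttle_spindle",
    "feed_jam":   "pause_and_retry",
    "power_fail": "escalate_to_human",
}

def respond(alert):
    """Return the canned action for an alert; escalate anything unknown."""
    return CANNED_RULES.get(alert, "escalate_to_human")

class TestCannedResponses(unittest.TestCase):
    def test_known_alerts_get_canned_logic(self):
        self.assertEqual(respond("temp_high"), "throttle_spindle")
        self.assertEqual(respond("feed_jam"), "pause_and_retry")

    def test_unknown_alerts_escalate(self):
        # A "stupid" early system passes anything unrecognized up to humans.
        self.assertEqual(respond("spindle_wobble"), "escalate_to_human")
```

Run with `python -m unittest`. As smarter logic gets added to the rule
table over time, the escalation test keeps the safe default honest.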
There are also some scenarios that can be modeled with tools like
http://ccl.northwestern.edu/netlogo/ before building hardware or
software, just to see if you would run up against known limits of
networks and information systems, resource distribution, etc.
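As a tiny taste of the kind of question such a model answers before
any hardware exists: if N machines raise alerts at some rate and one
human can only clear so many per hour, does the backlog stay bounded?
A back-of-envelope sketch (all rates and capacities here are invented
numbers; a real NetLogo model would be much richer):

```python
import random

def simulate_alert_queue(machines=50, alert_rate=0.1,
                         human_capacity=4, hours=100, seed=0):
    """Return hourly queue depth for random alert arrivals vs. fixed capacity."""
    rng = random.Random(seed)
    queue, history = 0, []
    for _ in range(hours):
        # Each machine independently raises an alert this hour.
        arrivals = sum(rng.random() < alert_rate for _ in range(machines))
        queue = max(0, queue + arrivals - human_capacity)
        history.append(queue)
    return history

depths = simulate_alert_queue()
# Mean arrivals (50 * 0.1 = 5/hour) exceed capacity (4/hour), so the
# backlog should trend upward: a sign you need smarter canned logic
# or more humans before scaling to 50 machines.
```

Same idea as an agent-based NetLogo run, just stripped down to one
loop, and cheap enough to explore parameter ranges in seconds.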