Infinite Undo!

A blog about Tentacular Devops

staying-with-the-trouble-as-a-service in the post-information age

Software As Narrative

How-To Articles

Devops Reading List


CC Sharealike © 2017 by Noah Sussman

Jul 2nd, Mon
How Change Works In Large Organizations

The Kübler-Ross change curve and the Six Phases Of A Project are two time-tested ways of visualizing how organizations cope with change!

Here the Kübler-Ross curve and the Six Phases are together for the first time!

I hope this infographic helps you achieve every initiative in your portfolio!

May 12th, Sat

stochastic methodology

teach a neural net to play planning poker with itself

Sep 3rd, Sun

Software Engineering As Hypothesis Invalidation

Venn diagram showing that testing is a subset of programming and that CDT and TDD overlap.


If testing software and writing code feel very different to you, it’s only because you haven’t written enough code yet. That is, admittedly, my own controversial opinion. I believe that everything we call “software testing” is a subset of the activity we call “programming.”

Implementation is a test of a hypothesis. To implement a pattern in code, one must first form a narrative, or if you prefer, a hypothesis. Implementation is itself a test of whether that narrative holds up.

Corollary: Consider that the full specification for a program and the program itself are the same thing. This implies you can’t design computer programs by up-front, complete specification. You are constrained by “the laws of nature” to begin with an incomplete hypothesis and proceed by testing the implementation of said hypothesis, using the results of that test to decide how to go about modifying either the hypothesis or the implementation or both, then repeating that process.

At all levels of the stack, it is always the case that complete specification is functionally impossible since such a thing could only exist in the form of a complete implementation. The largest Web application and the derpiest hello world program both have this quality: that they cannot be implemented by complete specification but must be built iteratively via (in)validation of hypotheses.
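As a minimal sketch of that loop in shell (assuming a hypothetical project whose test suite is wired up behind make test, with code in src/ and tests in tests/):

# The current hypothesis is the spec as encoded in the tests; each test
# run is an attempt to invalidate it. While the hypothesis is invalidated,
# revise the hypothesis, the implementation, or both, and test again.
until make test; do
    ${EDITOR:-vi} src/ tests/
done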

For a much more thorough exploration of this idea you can read or refer back to Programming As Theory Building by Peter Naur (1985).

Jun 25th, Sun

Software Rot

rotten bridge | by Ahia on Flickr


The software development life cycle is predictable in that any long-lived product will eventually outgrow some of its subsystems. For instance, a Web service that begins life with a single monolithic database server will, as it scales, need the capacity that comes from a distributed database. A historical example is Twitter’s original dependence on the ActiveRecord ORM, which over time was replaced with a variety of databases and services.

For historical reasons

In one sense this might be considered a canonical definition of legacy systems: the system contains subsystems that are no longer optimally suited to day-to-day functioning, despite the fact that at some point in the past those same subsystems did function optimally. The concept of Software Rot or Bit Rot metaphorically encapsulates the life-cycle phases that precede this sort of legacy system.

Frictionless yet it still wears out

The observed course of the software development life cycle is that features begin life in a “working” state — meaning that they satisfy the requirements agreed upon by an empowered group of stakeholders — but inevitably those same features begin to exhibit bugs that are not in any way related to changes in the code or the hosting environment. Software rot (as this phenomenon has come to be called) occurs because the requirements for features continue to change and evolve even after those features are in the hands of their users.

the system contains subsystems that are not optimally suited to day-to-day functioning

This is not a well-understood area of software production, nor does Computer Science have much to contribute by way of solutions. The problems are not algorithmic but environmental, social, aesthetic — in other words, they are what programmers like to call squishy problems, because so much of the problem space is taken up not by software but by humans and their co-collaborators.

In programming, soft skill is hard

The idea of engaging with squishy problems is uncomfortable for a lot of programmers. I think this is because programmers currently do not have an opportunity to learn the heuristics that would allow them to distinguish good solutions from bad when it comes to human and social issues.

Take software rot as an example: it is a well-known phenomenon, long documented in the software literature. Yet it has no commonly agreed-upon solution. Its management is not a topic of discussion in job interviews nor in performance evaluations (for the most part). The countermeasures for software rot are not listed in general programming books nor taught in coding boot camps.

I do not believe that software rot is ignored as a topic because no one recognizes its importance. I’m pretty sure it’s ignored because no one feels comfortable giving advice about it, since almost no one has successfully dealt with the long-term requirements changes and subsystem upgrades that go with solving software rot.

The knowledge of how this problem has been successfully dealt with is locked away in a couple of books. And those books are old, using server-side programming and Java as their example environment. That’s a hard sell to a junior engineer fresh out of a boot camp or undergrad program.

The recent movement toward systems thinking in software is a hopeful sign. But we need a modern discourse about how to deal with changing requirements over time, and so far such discourse hasn’t been forthcoming despite all the growth and hype around code over the last ten years.

Jun 18th, Sun

Suboptimization is THE reason for technical debt. But I have to go outside of the programming world to find discussion of it.

The uncomfortable truth is that dev is a complex relationship with an increasingly intelligent other.

Jun 14th, Wed

An illustration of the “funnel” for FOSS developer engagement.

Jun 12th, Mon

Miller is like jq for CSV and other tabular data



Miller is like awk, sed, cut, join, and sort for name-indexed data such as CSV, TSV, and tabular JSON.

Here’s how I use Miller to pipe CSV data into jq

jq is currently my tool of choice when it comes to processing all sorts of data. Except XML and CSV. XML is pretty well handled by xmlstarlet, but I have never found a CSV parsing tool that I liked. I have just made do with using jq to work with CSV, and to be honest I find the results I can achieve to be substandard.

Anyway, Miller solves all that, and it is easy to install even if you don’t use Homebrew and need to compile it from source.

mlr --c2j cat my_file.csv | jq .

It is that easy! Now my CSV is structured as JSON, a format I have spent a lot of time learning to enjoy working with in jq.
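Miller can also do row filtering and column selection on its own, before the data ever reaches jq. Here is a minimal sketch, assuming a hypothetical my_file.csv with user and status columns:

# Keep only the rows whose status column equals "error",
# emit JSON, and pretty-print with jq:
mlr --c2j filter '$status == "error"' my_file.csv | jq .

# Verbs chain with "then": filter rows, then keep just two columns:
mlr --c2j filter '$status == "error"' then cut -f user,status my_file.csv | jq .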

Jun 1st, Thu

Software testing considered as a series of band-pass filters

These are a pair of slides from a talk I am currently developing.

May 30th, Tue

Teleonomy and the Quality Without A Name (QWAN)

Good systems converge on quality. They snap back into shape when deformed. Autoscaling is an example of how Web systems regain their good state after some event has impacted functionality.

Bad systems act the same way: when pushed toward a better state, they gradually edge back toward the bad state. Take for example the familiar case of a legacy system that has been put under test but gradually edges back toward untestedness again.

Teleonomy confirms what we intuitively already know: that some systems are easy to collaborate with, and that other systems seem adversarial in our interactions with them.

The principle of Teleonomy shows that intent is not required in order for a system to pursue a goal. Neither collaboration nor antagonism requires intent.

Alexander is very clear in defining QWAN: good systems pursue the goal of internal consistency, and bad systems pursue a state of internal conflict. When humans pursue a goal of internal consistency, we are pursuing the same goal as a well-functioning complex system. That the goals are aligned is sufficient to say that human and computer are collaborating. Intent is not required: the observed state of alignment qualifies fully as collaboration.

The “snapping back into shape” of complex systems is a practical example of system memory, one of the defining characteristics of a complex system as opposed to a simple one. Memory is also a characteristic of cellular automata such as the Abelian sandpile model.

So computers can be seen (without resorting to metaphor) as collaborating agents that not only have goals for the future but also remember the past. This is the observed reality of working with computer systems: it is difficult not to eventually regard them as having motivations and goals which do not always align with our own. Teleonomy and sympoiesis are lenses through which this observed reality makes sense.

Dijkstra once said that the metaphor of “the bug that crept in while the programmer wasn’t looking” is intellectually dishonest, because any bug in the system must have originally been put there by a programmer. But Teleonomy implies sympoiesis between human and program, and that in turn calls Dijkstra’s statement into question.

Bugs (and features and everything in between) do NOT originate with a human agent. Instead the characteristics of the system are defined in the collaborative interaction BETWEEN human agents and computer agents. Bugs may creep in while we are not looking because our systems DO have agency, according to the principle of Teleonomy.

May 29th, Mon

Quality Driven Development?
