You probably know my position on #GDPR ( see https://robertolofaro.com/gdpr ): it is evolutionary, not revolutionary.
And, like many other recent regulations, it relies more on data and on perfect data-driven oversight than XIX century laws and regulations did.
Still, while our technological capabilities would enable that, carrying it out with efficiency (and efficacy) would require a complete overhaul of our underlying regulations.
Last year I shared what a representative of the Italian Data Privacy Watchdog said: their reaction time was stretched by the need to produce everything in both English and Italian, while they did not have enough staff with the needed skills (i.e. at least paralegals fluent in both languages).
Personally, over the last year I lost count of the cases of misinterpretation of the simplest element that would be the easiest to check automatically: pre-filled online forms.
Beware: "would be" does not imply "can", as those online forms, pre-filled with options irrelevant to the task at hand, are scattered across countless "channels".
Unless the European Data Privacy watchdog is willing to finance an army of webbots scouting sites for violations, relying just on users notifying violations manually, and on people from the various Data Privacy watchdogs manually reading the materials and converting them into something useful for a ruling, generates quite a few issues.
Ensuring, in each manual re-processing of the information, that each intervening human conveys the information without adding "noise" (or even just a personal "grudge" or unsubstantiated "rumor") is yet another degree of complexity added to the picture.
Now imagine checking a more complex issue, e.g. "privacy by default and by design".
The 4th industrial revolution is about data and their interpretation, and of course about having systems able to adapt to data, data trends, market trends, all influenced by the interaction between data.
Frankly, since the 1980s I have heard "paradigm shift" and "revolutionary" applied to technology so many times that I lost count.
This time it might be different, and I am not referring to GDPR, and for one simple reason: to paraphrase somebody else, technology is now too important to be left to technological experts.
A short data-centric digression
Already in the 1980s I saw first-hand the impact of data on senior management decision-making, through "Decision Support Systems on your desktop".
Recently I was reviewing the models that I designed back then (at 23-24) by extracting knowledge about their decision-making from senior managers who were often twice my age.
Compared with even your own smartphone, the quantity of data and computing power available then was primitive- but in many cases I worked with senior managers and Cxx-level executives who for the first time had direct access to data and, if they wanted, could actually compare scenarios, instead of waiting for some clueless junior bean counter to assemble yesterday's apples and pears.
The issue, already then, was the quality of data.
But, being people with significant business savvy, they could actually spot meaningless data, and differentiate between irrelevant outliers and trend-markers.
And they could also afford to be skeptical about the choices proposed by models, instead of, as IT staff routinely did, dwelling on how many hours had been spent building something that was trashed in minutes.
A quick introduction to the concepts related to those issues can be found in a 1988 book by Rockart and De Long, "Executive Support Systems", based on research carried out in 1985 and 1986 at the MIT Sloan School of Management CISR; if interested, in my personal library I have similar studies from other perspectives from the same era, e.g. from NATO and others on decision-making and the impact of computers and data quantity on decision quality, but discussing those books is outside the scope of this short article.
Generally, the longer the "feeding chain" needed to deliver something, the harder it is, in many corporate environments, to accept that lessons might be derived from it, but that it is not worth repairing or "fixing".
I remember a discussion about a report within a large management reporting initiative- a report delivered along with dozens of others just because it was feasible and easy to derive from another one.
Manager one: "it was delivered at zero cost"; manager two: "and it has value added minus two".
In the 1990s, the quantity of data expanded, memory became cheaper, and the number of desktops with "decision support technology at your fingertips" dramatically expanded.
Moreover, instead of selling proper management consulting, which would require years and countless projects to develop, delivering bells & whistles on each desktop required skills that often took mere months to develop; furthermore, "scalable" skills.
Or: try adding one hundred management consultants able to redesign organizational structures in one year... it is not feasible.
Instead, adding one hundred experts in developing dashboards by assembling "lego bricks" and data in the same timespan is feasible.
But once you expand the supply, you need to expand demand, and expanded it was- apparently democratizing access to data for lower levels of management.
The main issue: those watching all those bells and whistles did not necessarily have the ability to assess the quality of the information, and most decisions upstream often relied on data "filtered up" and augmented by intermediary levels.
Nowadays we can access data as soon as a transaction is carried out, and there is often no need for human intervention (i.e. added "noise") to make data accessible higher up the feeding chain: but the higher you go in any large organization, the weaker the operational understanding of raw data without interpretation.
I discussed the point in another mini-book that you can read online ( https://robertolofaro.com/relevantdata ).
End of the digression, and let's start "unbundling" the title from the end.
Part 3 of the title: The #hype #alert on #4thindustrialrevolution
Let's focus just on AI (Artificial Intelligence).
Just as a side note on another element of current technological trends: the level of hype around blockchain is well represented in my LinkedIn stream: plenty of posts, but... the sources are either those selling solutions or projects, or those teaching about it.
Also, comments almost never come from the Cxx side of the customers: at most, from those whom companies appointed as "blockchain guru" just to show the Board that the company isn't missing the train (I saw the same in the early 1990s with methodologies, and in the mid-1990s with knowledge managers).
The main issue: too many announcements of "revolutionary impacts" actually create more hype than needed.
Obviously, hype is good for business- but not necessarily for businesses and society at large.
There is a catch: applying the 4th Industrial Revolution to deliver real value without wasting resources would require a more active (and continuous) involvement from the "customer" side.
Specifically: there are areas where applying parts of artificial intelligence could (relatively fast) deliver results if technology were to be at least understood by those on the buying side.
On Artificial Intelligence, instead, hype is at last meeting reality- probably because there have already been applications (e.g. in HR, to filter CVs from applicants- filters that showed some bias) that are more visible than the applications I studied decades ago (the most famous at the time was the "expert system" used by Amex to authorize credit card charges or require confirmation).
Decades ago, when I asked a professor of simulation at my university what he suggested, and whether he could lend me his copy of "Gödel, Escher, Bach", he replied with two "no"s: the focus of his course (which would have come a few years down the road, had I not dropped out- first for the Army, then to travel around for work) was discrete simulation, not continuous simulation (my idea was to create an ecological simulation game; at the time, simulation was mainly about moving from state A to state B).
If you talk about "artificial intelligence" nowadays, for both the general public and most IT experts you are in reality talking about a wide range of options.
These range from something closer to the old "rule-based" expert systems (e.g. extract expertise from experts, and convert it into a kind of Q&A software that guides you to solutions- a kind of software-based "5 Whys") to autonomous vehicles able to make driving decisions.
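As an illustration, the "rule-based" end of that range can be sketched in a few lines: a hypothetical forward-chaining engine over hand-written rules (all rule names and facts below are invented for illustration, not taken from any real product):

```python
# Hypothetical troubleshooting rules: if all conditions hold,
# the conclusion is added as a new fact (a "5 Whys"-style chain).
RULES = [
    # (conditions that must all hold, conclusion to add)
    ({"printer_offline"}, "check_power"),
    ({"check_power", "power_ok"}, "check_cable"),
    ({"check_cable", "cable_ok"}, "reinstall_driver"),
]

def forward_chain(facts):
    """Apply rules repeatedly until no new conclusion can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Example: a user reports the printer is offline and the power is fine.
derived = forward_chain({"printer_offline", "power_ok"})
print(sorted(derived))
# → ['check_cable', 'check_power', 'power_ok', 'printer_offline']
```

The "explanation" such systems could give is simply the chain of rules that fired- a property that, as discussed below, matters even more for current AI.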
In all cases, what I saw in the 1980s and tried to develop in PROLOG back then- the need for a DSS to be able to "explain" how it produced its results, e.g. how it changed values for costs and revenues across the product and channel mix to converge on a specific target goal- is now even more critical.
Some of the applications currently discussed require that "continuous explanation"- as e.g. data from the interactions of a vehicle with traffic and the environment pour in too fast for any human "supervisor" to revise decisions.
So, "explanation" in that case would be closer to what in other business areas is called an "audit trail", i.e. keeping tabs on all decisions so they can be revised; of course, the amount of data would be huge- so this could require a specific on-board technological equivalent of an airplane's "black box".
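A minimal sketch of what such a tamper-evident "black box" audit trail could look like, assuming a simple hash chain (the field names and decisions below are illustrative assumptions, not any real on-board format):

```python
import hashlib
import json

def append_decision(log, decision):
    """Append a decision record, chained by hash to the previous one."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {"decision": decision, "prev": prev_hash}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(record)
    return log

def verify(log):
    """Recompute the chain: any altered record breaks verification."""
    prev_hash = "0" * 64
    for record in log:
        payload = json.dumps(
            {"decision": record["decision"], "prev": record["prev"]},
            sort_keys=True).encode()
        if (record["prev"] != prev_hash or
                record["hash"] != hashlib.sha256(payload).hexdigest()):
            return False
        prev_hash = record["hash"]
    return True

log = []
append_decision(log, {"action": "brake", "speed_kmh": 42})
append_decision(log, {"action": "steer_left", "speed_kmh": 38})
print(verify(log))   # True: chain intact
log[0]["decision"]["action"] = "accelerate"
print(verify(log))   # False: tampering detected
```

The point of the hash chain is exactly the "audit trail" property: decisions can be revised after the fact, and any later alteration of the record is detectable.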
Other applications instead are focused on specific events (e.g. suggesting what to eat for lunch), and are slow enough to enable human supervision.
In the end, if, say, Amazon suggests you to buy something, when you say "yeah" or "nay", you are "supervising" the AI (and, incidentally, providing information to Amazon so that it can refocus future suggestions).
Creating a "bubble" of experts that initially listens only marginally to the customer, and then delivers a "black box" requiring a full complement of full-time "guardians of the technology" who have no clue whatsoever about the business, but dictate what can and cannot be done on that side... is not my approach.
Moreover, while it was just a nuisance with other technologies, if the "bubble" approach is used to "seed" or maintain artificial intelligence solutions this might generate at least two issues.
First, consistency with the corporate culture and ethos not of the customer, but of the supplier- or even of the developers (as in some recent cases).
Second, due to a superficial "injection" of the informal side of the customer's culture, solutions might evolve in a way that is inconsistent with specific business domain expertise.
Over the last few days, I discussed with some contacts in Turin (and a few tribes of observers) an application that could actually deliver some value...
...at least, more than has been delivered so far by other initiatives- focusing on an approach that builds on successes and failures (overall, lessons learned) to expand across an organization while delivering cultural change, not force-feeding a big bang.
Now, fast forward to today, Italy in late July 2019
In Italy, e-government at last supposedly met digital transformation: Italy is the only country I have lived and worked in where the digital transformation of processes actually expanded the time needed to obtain a result.
And the costs too, if you consider that the increased number of interactions linked to the "Italian way of digital transformation" expanded the number of bureaucratic exchanges.
You would not see that in the statistics, which could probably present nice data about the number of days needed to process something (easy: filter access, transfer most of the pre-processing upstream, and when you finally receive something that needs to be processed, the task is simpler, faster, unambiguous).
That this actually reduces productivity by adding "shadow processes" before you can even interact with the State or local authorities seems to raise almost no concern in Italy.
There are various reasons, but probably the main one is that attempts are routinely made at a "big bang", without assessing the degrees of freedom and the SWOT of the parties involved- so that, if, say, you have an innovation czar with a limited tenure, resistance to change can actually turn vicious.
Instead of just resisting, they jump on the bandwagon, change the façade, and, as soon as the innovation czar either makes a false step or leaves... they keep the visual elements that let everybody claim "innovation done", while in reality plugging them into what was there long before.
It is not the first time that "modernization" in Italy sounds closer to building Potemkin villages than to real change: we have had plenty of innovation attempts, and a favorite of any government is the mantra of "standard costs" across the country for anything from services to product purchases within central and local authorities.
Another favorite is appointing a streamlining czar, an anticorruption watchdog, and so on and so forth.
As I said to a friend who asked how bad the issues with organized crime (we generally call it "mafia") were in Southern Italy in the early 1970s (I lived there for a couple of years, until late summer 1972): at the time, the common parlance was that the mafia was an invention of communists to collect votes.
On Friday I posted an article on that long-standing issue: #Italy: #war of #attrition on #organized #crime and the #gentrification of the latter.
I would like to start with the end of that article to shift to a potential application of AI: "As an example, we do complain a lot about the sheer number of laws. And our Parliament as well as local authorities keep churning out new laws.
But, in our times, this could actually become a benefit."
Italy has more laws than many countries, and keeps churning out laws.
The interesting element is that most laws are actually signed and then require an "implementation mechanism" to be defined by bureaucrats.
Well, looking at, say, the latest half dozen governments: I remember seeing an interesting table showing how many laws still lacked that "mechanism" a couple of governments down the road.
The first application is nothing akin to flying cars or smart cities: it is just the result of what I saw while working on the "customer" side (as a citizen, and for my own or somebody else's business) and on the "supplier" side (as project manager and business analyst on a few government projects involving controls, expenditure, risk).
So many laws and so many regulations imply oversight, as in our tribal society we Italians put tribe and family before the common good.
And what better than a bit of machine learning to support our controllers, local and central, and keep them "clean from tribal influences"?
Part 2 of the title: quis custodiet ipsos custodes
For a few decades now, mayors, who used to have a kind of in-house administrative controller (provided by a central government ministry), have been self-directed.
Moreover, there have been changes here and there to enable expenditure with laxer, ex-post controls.
There have been repeated attempts to create a single purchasing function working across all central and local authorities- but obviously there are exceptions, thresholds, and conditions that enable bypassing this entity, and regional governments also created their own purchasing entities, e.g. focused on health or shared services.
And, as in any country in the world, any purchasing function is a power centre- so you can imagine what happens in a country where the "spoils system" after an election isn't limited to a few top managers, but traditionally extends up and down any structure supported by taxpayers.
Do we have a control mechanism? Yes, but mainly on formal issues and after the fact- and, frankly, re-read the commentary above about reporting data privacy violations, and multiply it by a few orders of magnitude.
As for corruption, or potential corruption: with expenditure increasing at the local level under laxer ex-ante controls, the cost of a potential investigation would often far exceed the value that might be recovered, even if the controlling organizations had the resources needed.
And, with over 8,000 towns and villages, not even a (properly working) Gosplan might really produce results.
As I learned from companies in the late 1980s, in any bureaucracy the trouble isn't corruption: it is when corruption alters processes to generate the need or motivation for corruption.
Not too long ago a leading Italian news magazine wrote that Italy shifted from being a country where there was corruption on tenders and the like, to a country where micro-bribes are endemic (easy to understand: the more laws, rules, regulations, and edicts you have to comply with, the higher the chance that somebody, sometime, is running afoul of some of them).
Also, with so many laws and regulations (local authorities too are quite happy to churn out whatever they are entitled to regulate), we have plenty of rulesets that could easily be cross-checked, as we are also happy to produce a paper trail (often in three copies) for any interaction between any two offices.
My definition of "corruption" has nothing to do with money: paraphrasing what somebody wrote over 1,000 years ago, you can corrupt by action, hand, or thought- and all three might be "degrees of freedom" that somebody will use to obtain an advantage.
But... fast forward: now that everything leaves an electronic as well as a paper trail, in Italy we could redefine the concept of "revolutionary", "paradigm shift", etc.- by doing something that might sound dull and run-of-the-mill, but that, in our tribal economy, would create issues for anybody trying to cheat.
Machine learning could actually help restore pre-emptive checks on any bureaucratic action: be it a promotion, a reassignment, expense reports, onboarding a supplier, obtaining people on secondment from another administration, etc.
As we have way too many "regulation generation centres", a first use could be to cross-check whether there is convergence, or whether some "hotspot" here and there is out of line (e.g. generating way too many authorization points, as each authorization point has the potential of turning into a bribe-collection centre).
Part 1 of the title: #reinventing the #business of #government
A question I was asked: who should lead such an initiative? My approach to decision support systems and to cultural/organizational change has always been that the customer should ensure continuity and absorb as much knowledge as needed to get on with everyday business life.
I think that experts should not drive initiatives; they should provide their expertise to the driver, who, supposedly, knows the overall, systemic purpose that the experts' work is going to be just one element of the solution for.
It might well be that an expert is called to drive, but then (s)he should avoid turning delusional by assuming that (s)he is a know-it-all: even experts need experts, both in specific niches of their own expertise and certainly in other domains of expertise.
Frankly, I would rather see this application start on a specific subset of processes with a larger "target audience", in a few sample territories, to "tune" it on a reference sample; then extend it to build a "target" and "national/by territory" baseline, and finally identify, among new actions, the "outliers", i.e. those actions that are out of line.
This would at least enable controls to "manage by exception" (and without any tribal mediation to barter interests), focusing just on those actions out of the ordinary.
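As a sketch of this "manage by exception" idea, assuming a simple z-score rule over illustrative counts (the office names, counts, and threshold below are invented; a real baseline would need far more robust statistics):

```python
import statistics

def flag_outliers(counts, threshold=2.0):
    """Return the keys whose value deviates from the mean of the
    baseline by more than `threshold` standard deviations."""
    values = list(counts.values())
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [k for k, v in counts.items()
            if abs(v - mean) / stdev > threshold]

# Hypothetical baseline: number of, say, direct-award purchases per
# office in the sample territories; one office is a suspicious hotspot.
counts = {"office_01": 12, "office_02": 14, "office_03": 11,
          "office_04": 13, "office_05": 12, "office_06": 10,
          "office_07": 15, "office_08": 13, "office_09": 11,
          "office_10": 95}
print(flag_outliers(counts))
# → ['office_10']
```

Only the flagged exceptions would then be passed on, upon human decision, to the existing controlling entities- everything in line with the baseline would simply not consume scarce control resources.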
Yes, it would not be perfect- and a certain degree of "incorrect" behavior would probably remain- but, frankly, it would be a pragmatic starting point: reducing corruption or misuse of "common" resources to a shared minimum would be a first step toward being able to identify and implement improvements.
Otherwise, we will be forever stuck with "exception as the rule".
What we need is people able to set, sell, and try to implement a political platform; to delegate while surrounding themselves with people worth delegating to; and to accept that mistakes can be made but "the buck stops here".
Our excessive number of laws? It could actually be a boon to streamline our collective governance and overcome the "who is controlling the controllers?" paradox.
Risks? Well, such a system would be purely informative- pre-selecting areas for further scrutiny, to then, upon human decision, be transferred as actions (e.g. investigations to start) to already existing controlling entities.
No need to fear either a "Terminator" or, worse, a "Colossus - The Forbin Project" scenario...