Instruments of Darkness

Recently, completely by accident, I picked up a copy of Alfred Price's Instruments of Darkness: The History of Electronic Warfare, which (in the main) tells the story of the competitive development of radar capability during World War Two. (It does extend into the 1960s, but you can tell his heart really wasn't in the telling of events in the later years.)

Regardless, it is an absolutely fantastic book, as gripping as any novel - despite the subject matter being basically the history of techniques for electronic detection and deception. One of the elements that interested me was just how relevant some of this story is to the world of cyber-security in the twenty-first century. (I admit this could be just my way of pretending that the time spent reading it really was effective research, but I don't think so.)

Anyway, some of the key messages:


  1. There are no absolute victories. At best, a development provided a temporary advantage in a see-saw battle of measures and counter-measures. The development of new solutions had to be constant.
  2. A combination of offensive and defensive measures was required.
  3. Sometimes your own capabilities could be used against you - for example, signals emitted by electronic counter-measures on some fighters were used to track those fighters.
  4. Not all electronic measures were countered electronically - sometimes a change of physical tactics was the most effective response (night fighters infiltrating the bomber stream, for example).
  5. Electronic warfare was very much hybrid warfare, with kinetic attacks on radar stations and the like.
  6. Technical skill combined with management that got the job done was a major source of differentiation.
  7. Intelligence regarding the enemy's capability was fundamental to success, including information from PoWs, acquisition of enemy equipment (hint: never put secret kit in a location with easy access for a commando raid), and creative interpretation of seemingly meaningless breadcrumbs of data that helped build a strategic picture.
  8. There were mistakes, dead ends, ideas that should have worked but didn't - and ideas that should not have worked but did! It was not a predictable environment, and luck played a part.
  9. Getting it wrong was very costly.
I guess I shouldn't be surprised at some of the similarities: cyber is really the mutant child of electronic warfare and information warfare, so it should share some of the same characteristics. But I did find it bizarre that there were even competitions to generate new ideas and to get new developments moving (not unlike some of what is being done today for cyber).

I must admit that, in general, I am not keen on extrapolating from history for an understanding of cyber issues, but in this case I will make an exception.
(Cross posted from my DPhil blog as it seems relevant.)
