Author: fuzapost

  • AI Regulation: Housing Update March 2025

    Let’s go back to the start of the current AI and regulation story. ChatGPT launched in November 2022, sparking a wave of hype and interest in AI. Let’s step through some of the important UK government reports about AI that could have an impact on social housing providers.


    The UK government’s AI Regulation White Paper, “A Pro-Innovation Approach to AI Regulation” (published March 2023 and updated August 2023; the “White Paper”), and its written response of February 6, 2024 were perhaps the first position papers. The key thing here? There likely won’t be a central regulator for AI. The five government-recommended principles for all regulators:

    • Principle 1: Regulators should ensure that AI systems function in a robust, secure, and safe way throughout the AI life cycle, and that risks are continually identified, assessed and managed. 
    • Principle 2: Regulators should ensure that AI systems are appropriately transparent and explainable.
    • Principle 3: Regulators should ensure that AI systems are fair (i.e., they do not undermine the legal rights of individuals or organizations, discriminate unfairly against individuals, or create unfair market outcomes).
    • Principle 4: Regulators should ensure there are governance measures in place to allow for effective oversight of the supply and use of AI systems, with clear lines of accountability across the AI life cycle.
    • Principle 5: Regulators should ensure that users, impacted third parties and actors in the AI life cycle are able to contest an AI decision or outcome that is harmful or creates a material risk of harm, and access suitable redress.

    Next we have the AI Opportunities Action Plan (January 2025). This sets out the UK government’s approach to AI, and was much promoted in the media. For Housing, there are a few key recommendations: that the government “Commit to funding regulators to scale up their AI capabilities, some of which need urgent addressing”, and, potentially the most exciting suggestion for Housing, “Require all regulators to publish annually how they have enabled innovation and growth driven by AI in their sector.” The report also contains an implementation framework, suggesting an approach to AI of Scan > Pilot > Scale. I’d say Housing as a sector is somewhere between Scan and Pilot at present, varying wildly by individual organisation. An interesting part of this will be the Regulator of Social Housing’s future position on AI for the sector – something not yet published (many regulators haven’t published theirs, so this isn’t odd).


    Another newsworthy item: in February 2025 the UK Government declined to sign the Statement on Inclusive and Sustainable Artificial Intelligence at the Paris AI Summit. Signatories to the Declaration pledged to make AI “open, inclusive, transparent, ethical, safe, secure, and trustworthy”. I don’t think this directly impacts Social Housing too much, but it sets some of the trajectory for wider commercial developments.


    Quietly, an Artificial Intelligence Playbook for the UK Government was published in February 2025. This report contains decent advice for AI in public services, and in my view its principles also hold well for Social Housing providers. The paper is short and worth a read. The key considerations it outlines:

    1. You know what AI is and what its limitations are.
    2. You use AI lawfully, ethically, and responsibly.
    3. You know how to use AI securely.
    4. You have meaningful human control at the right stage.
    5. You understand how to manage the AI life cycle.
    6. You use the right tool for the job.
    7. You are open and collaborative.
    8. You work with commercial colleagues from the start.
    9. You have the skills and expertise needed to implement and use AI.
    10. You use these principles alongside your organisation’s policies and have the right assurance in place.

    The EU AI Act’s first obligations began to apply in February 2025 (the Act itself entered into force in August 2024). This is a consideration for Housing, in that this thinking pattern may be adopted in the UK. It is quite large, to the extent that it has its own navigation tool for all the chapters, annexes, and recitals. An important (and in my view sensible) feature is its different risk levels, each requiring differing levels of scrutiny.

    I’ll just call out one level: “Unacceptable risk, therefore prohibited, examples include the use of real-time remote biometric identification in public spaces or social scoring systems, as well as the use of subliminal influencing techniques which exploit vulnerabilities of specific groups.” Housing providers need to act ethically and responsibly, and the use of data systems including AI is no exception.


    The Artificial Intelligence (Regulation) Bill (2025), a private member’s bill, passed its first reading in the House of Lords two weeks ago. It seems unlikely to pass, but again it may signal the future trajectory for AI regulation. According to Kennedys, the Bill suggests:

    1. Creation of an AI Authority: the Bill proposes the establishment of a dedicated regulatory body tasked with overseeing AI compliance and coordinating with sector-specific regulators.
    2. Regulatory principles: the Bill enshrines the five AI principles derived from the UK government’s March 2023 white paper, “A Pro-Innovation Approach to AI Regulation”.
    3. Public engagement and AI ethics: the Bill highlights the need for public consultation regarding AI risks and transparency in third-party data usage, including requirements for obtaining informed consent when using AI training datasets.

    Final thoughts: don’t forget, we still have GDPR. Compliance with GDPR across all data systems remains a requirement, and the risk of GDPR breaches has definitely increased as a result of genAI. Anecdotally, we’ve seen things like customer information being emailed out to personal accounts because ChatGPT is blocked on the network. Firming up on GDPR and on cyber security is always important. Keeping abreast of the latest regulatory and ethical considerations is a good idea at this time, while also better understanding and achieving benefits from the tools we now have access to.

    Written by a human who is not a lawyer. This is not legal advice.

  • “Good” applications of AI

    Following an AI leadership session at Yorkshire Housing in July 2024, there was an engaging discussion about good uses of AI – “good” in terms of both ethics and appropriate (or impactful) use of AI. I’ve put together a few examples here, covering a range of methods and application areas, to show the art of the possible and hopefully inspire those closest to the work to consider their own possible “good” uses of AI.

    Alphafold

    This tackles a scientific problem: modelling the 3D structure of a protein. Determining each of these experimentally could take an entire PhD, and at the time of writing over 200 million proteins have been modelled. This includes structures involving DNA, and has massive applications for medicine and health. An additional “nice” thing is that the earlier code and all of the data have been made open source, so scientists can validate and improve the system. This is a highly specialised machine learning system.

    More info: https://alphafold.ebi.ac.uk/

    AI assistant for customer support agents

    Lots of companies provide this; in one business I’m currently working with a startup called Ducky. These tools work like plugins (in Ducky’s case, in your browser): based on the support query, Ducky surfaces relevant policy documents and summarises the email thread. It can also draft a response for editing by the customer support agent. This saves typing out some boilerplate, and we’ve found it really helps support agents get through things more easily (especially where an obscure policy is needed). A rough sketch of the retrieval idea follows the link below.

    More info: https://www.ducky.ai/
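
    To make the “surfaces relevant policy documents” step more concrete, here is a minimal sketch of how that kind of retrieval can work. To be clear, this is not Ducky’s actual implementation (I haven’t seen its internals); it is a generic TF-IDF similarity search over some made-up policy snippets, purely to illustrate the idea.

      # Minimal sketch of surfacing relevant policy documents for a support query.
      # Not Ducky's implementation -- a generic TF-IDF retrieval illustration
      # over invented policy snippets.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      policy_docs = {
          "complaints_policy": "Stage 1 complaints must be acknowledged within 5 working days...",
          "repairs_policy": "Emergency repairs are attended within 24 hours; routine repairs within 28 days...",
          "damp_and_mould_policy": "Reports of damp and mould are inspected within 10 working days...",
      }

      def retrieve(query: str, top_k: int = 2) -> list[tuple[str, float]]:
          """Return the top_k policy documents most similar to the support query."""
          names = list(policy_docs)
          vectoriser = TfidfVectorizer().fit(list(policy_docs.values()) + [query])
          doc_vecs = vectoriser.transform(list(policy_docs.values()))
          query_vec = vectoriser.transform([query])
          scores = cosine_similarity(query_vec, doc_vecs)[0]
          ranked = sorted(zip(names, scores), key=lambda pair: pair[1], reverse=True)
          return ranked[:top_k]

      print(retrieve("Tenant has reported mould in the bathroom, what is our response time?"))

    A production assistant would typically use embeddings and sit behind a language model that drafts the reply, but the core pattern (query in, most relevant policy text out, human edits the final response) is the same.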

    AI alert for heart anomalies in hospitals

    This AI is hooked into ECGs monitoring the heartbeats of people in hospital, and alerts clinicians if there is a high risk of death. It led to a 31% reduction in deaths during clinical trials, more than would be expected from a new medicine, even though this is “just” a monitoring system.

    More info: https://www.newscientist.com/article/2428674-ai-that-determines-risk-of-death-helps-save-lives-in-hospital-trial/

    Breast cancer detection

    This one is about computer vision. The system picks out whether a mammogram is likely to be showing early-stage cancer. The particularly smart part here is the implementation: mammograms are studied as usual by two radiologists, then put through the system, which flags any it thinks have been missed to a third human radiologist, who then decides whether to recall the woman. This keeps the human central to the process, whilst still gaining the benefits of early identification of breast cancer. No jobs are at risk, and the human still catches some cases where the AI would miss an identification. A rough sketch of this triage pattern follows the link below.

    More info: https://www.imperial.ac.uk/news/249573/new-ai-tool-detects-13-more/
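
    As promised above, here is a rough sketch of that triage pattern in code. This is my own illustrative reconstruction of the workflow as described, not the actual clinical system; the field names and the escalation threshold are invented.

      # Illustrative sketch of the "AI as safety net" reading workflow described
      # above: two human readers first, and the AI only escalates suspected
      # misses to a third human reader. Field names and threshold are invented.
      from dataclasses import dataclass

      @dataclass
      class Mammogram:
          case_id: str
          reader_1_flagged: bool    # first radiologist's read
          reader_2_flagged: bool    # second radiologist's read
          ai_cancer_score: float    # model's estimated probability of early-stage cancer

      AI_ESCALATION_THRESHOLD = 0.8  # invented threshold

      def triage(case: Mammogram) -> str:
          # If either human reader flags the case, it is recalled as usual;
          # the AI never overrides a human recall decision.
          if case.reader_1_flagged or case.reader_2_flagged:
              return "recall (human readers)"
          # Both humans read it as normal: the AI acts purely as a safety net,
          # escalating suspected misses to a third human reader who decides.
          if case.ai_cancer_score >= AI_ESCALATION_THRESHOLD:
              return "escalate to third reader"
          return "no recall"

      print(triage(Mammogram("case-001", False, False, 0.92)))  # escalate to third reader
      print(triage(Mammogram("case-002", True, False, 0.10)))   # recall (human readers)

    The point of the design is that the AI only ever adds a review step; it never removes one.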

  • Switching contexts: Will Machine Learning work on my data?

    There has been a lot about diagrams on here recently – this post is about the machine learning itself.

    I don’t know about you, but one of the big problems I have is taking a guess at whether the latest ML approach is likely to work on the data I’ve got. Wouldn’t it be cool if we could test that, without having to implement the whole system?

    A very brief summary of the paper

    Take a CNN cat-photo classifier in Computer Vision. It works for photos from Wikipedia, but will this approach work for my personal cat photos? We might expect that the underlying features would be similar (e.g. fundamental features of cats, such as the outline of their faces), but some important aspects might not be (e.g. something about my own camera or photographing style might differ from what is found on Wikipedia). This makes it hard to know if it will work. “Luckily”, in real life, we’ve given corporations so much access to our personal data that the classifier is already trained on real-life photos :/.

    The paper explores how the complexity of the data impacts the effectiveness of the approach, for the same task. What that means is that we can just look at the data itself (rather than train and run the algorithms) in order to take a guess at how good a particular approach might be. In business, this can save huge amounts of time and energy. The paper doesn’t get all the way to a full testing rig for this, instead laying theoretical groundwork and conducting a series of trials.
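
    To make that concrete, here is a minimal sketch of the kind of cheap “look at the data first” check I have in mind: a leave-one-out 1-nearest-neighbour score as a rough proxy for how separable the classes are, before anyone commits to building a full model. This is my own illustration of the idea rather than the paper’s method, and the dataset here is synthetic.

      # Cheap "look at the data before building the model" check:
      # leave-one-out 1-NN accuracy as a rough proxy for class separability.
      # An illustration of the idea, not the method from the paper.
      from sklearn.datasets import make_classification
      from sklearn.model_selection import LeaveOneOut, cross_val_score
      from sklearn.neighbors import KNeighborsClassifier

      # Stand-in dataset; in practice this would be your own features and labels.
      X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                                 class_sep=1.0, random_state=0)

      # 1-NN with leave-one-out: no real training, just distances in feature space.
      score = cross_val_score(KNeighborsClassifier(n_neighbors=1), X, y,
                              cv=LeaveOneOut()).mean()

      print(f"1-NN leave-one-out accuracy: {score:.2f}")
      # A score close to chance suggests the classes barely separate in this
      # feature space, so a fancier model is unlikely to rescue it without
      # better features.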

    Exec summary

    We might be able to guess the effectiveness of ML approaches based on the data alone.

    Get in touch with fuza.co.uk if you’re interested in exploring this space further.

  • Why are scientists making diagrams?

    Diagrams take up a lot of space. In scientific publications, space is at a premium… so why do scientists use diagrams at all?

    In this paper, I explore a possible relationship between how people are thinking and how they are communicating. This topic has been quite widely explored (e.g. Extended Mind Theory), but what is done less often is examining a collection of diagrams to try to see patterns in how the diagram creators might be thinking.

    One of the interesting aspects of this paper from a software perspective is the different abstraction levels which can be prioritised in diagrams:

    • Function
    • Data
    • Mathematics
    • Code

    In enterprise software architectures, we might also have e.g. non-functional or commercial considerations. As noted in this post, this is (or should be!) about the user’s needs!

    Exec summary

    We might be able to get insights into how people are thinking, from the diagrams they create. (And maybe the prioritisations in the diagrams we create might also help shape how people interpret our systems.)

  • Grammar of graphics?

    Happy New Year! Continuing the theme, I’ve another post about diagrams, more qualitative than the pre-Christmas quantitative citation count treat :).

    For decades, Yuri Engelhardt and Clive Richards have been researching and teaching about graphics, especially graphical representations of data and information visualisations. I took their latest work, VisDNA, and applied it to some of the latest Neural Network diagrams of Computer Vision and Natural Language Processing systems.

    In the paper and in the video below, I discuss a few examples and apply the VisDNA framework, suggesting some extensions to this “grammar” and using it to describe some of the ways scientists are communicating about complex systems using diagrams.

    Exec summary

    There are frameworks we can use to describe, reflect on, and critique diagramming practices. See thediagramguy.com if you’re interested in finding out more.

  • Science diagrams: Correlated with citation count

    This one was an unexpected and pretty cool result… It turns out that, in scholarly AI publications, including 2-3 diagrams is correlated with higher citation counts after 3 years. The meta thing here is that it is a bit odd that diagramming practices were found to be related to citation count at all.

    There is another big block of work, making up the majority of my PhD thesis, which is about guidelines for NN architecture diagrams. It turns out that there is a correlation between compliance with >10 of the 12 guidelines and citation count. (This, of course, is only a small part of the evidence supporting the claim that the guidelines are useful – the majority being empirical user studies.)

    There is quite a lot of information… I’ve distilled it in the video below, and also made all the code and data publicly available.
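
    For anyone who wants to poke at this kind of relationship themselves, the core check is a simple rank correlation. Below is a toy sketch with made-up numbers; the real code and data are the ones linked above.

      # Toy sketch of the kind of check behind this result: a rank correlation
      # between the number of diagrams in a paper and its citation count.
      # The numbers below are made up; the real code and data are linked above.
      from scipy.stats import spearmanr

      diagram_counts = [0, 1, 2, 2, 3, 3, 4, 5, 1, 2]
      citations_3yr = [3, 8, 21, 15, 30, 18, 12, 9, 5, 25]

      rho, p_value = spearmanr(diagram_counts, citations_3yr)
      print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
      # Correlation is not causation: this only says the two quantities move
      # together in this sample, not that adding diagrams causes citations.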

    Exec summary

    Diagrams are measurably important in science, and may capture aspects of good practice.

  • Structuralism for diagrams

    You might have heard of structuralism in linguistics? The general idea, according to Wikipedia, is that “elements of human culture must be understood by way of their relationship to a broader system.” This idea resonates with me as an agile practitioner and systems thinker, so I decided to explore diagrams further through this lens. It gets a bit deep, so I’ve made a video about it:

    Exec summary

    We can examine diagrams as if they are a language, and use this to uncover patterns and good practices.

  • Rethinking how we share scientific understanding in ML

    There was a really cool workshop at ICLR 2021, with the same title as this blog post, so I had to get involved! I wrote this paper about diagrammatic summaries for neural architectures.

    Exec summary

    The paper discusses some of the options for the scientific community around diagramming. The tl;dr is that diagrams are used a lot but no one thinks about them much. Sound familiar from business?! I’ve made a YouTube video about the paper:

    My main takeaway is that we should start from where we are with our diagramming, and be a bit “agile” about how we move things forward. Scientific publishing is a strange beast, with large organisations and slow processes, but with really disruptive thinking going on among the “users”, especially in ML research. A very brief summary might be:

    If anyone thinks slightly more about their diagramming, that is a good result.

  • How do we know if our diagrams are any good?


    The blog title wasn’t the title of the paper, but that was the main idea behind “Measuring diagram quality through semiotic morphisms”, published in Semiotica 2021.

    Exec summary

    In this paper, I summarise a history of diagramming, and propose some ways to measure diagram quality, “based on the properties of their encoding, pragmatic and perceptual morphisms”. What this means is we can try to split out:

    • The process of capturing what we want to in a diagram;
    • The process of interpreting the diagram; and
    • The actual usage of the diagram to support a particular task.

    This is important in business too: just think about system architecture diagrams. There are a multitude of different things a system architecture diagram could capture, from the code itself, through to the modules, the dependencies (on packages or on other systems or data), non-functionals such as security, the tech stack, and so on. Whether the thing captured in the diagram is of any use to anyone depends on what it is used for and then, almost secondarily, on how well that information is laid out in the diagram. If I had one take-away from this paper, it is:

    Consider your user when creating a diagram!

    And finally

    This paper was particularly special because it won the Mouton d’Or award, for being the best paper in the journal in 2021! The judges very kindly wrote:

    The Committee members made this decision based on several factors. First, Diagrams have always played a significant role in the explication of theoretical concepts in semiotics. This essay provides an excellent overview of the semiotic diagram and its use as a pedagogical and instrumental visual to advance semiotic theory. It also applies taxonomy and category theory to assess the quality of a diagram. Second, this essay provides an evaluation metric for diagrams in semiotic theory. Third, this article provides an organized and comprehensible discussion and evaluation of the form and function of the diagram in semiotic research. Finally, the authors have applied taxonomy and category theory to the interdiscipline of semiotics to evaluate diagram quality.

    Semiotica Mouton d’Or judges (see also https://www.degruyter.com/journal/key/semi/html)

    This research inspired the creation of thediagramsguy.com – a consulting service which aims to improve diagramming in businesses.

  • Remember remember (to take time to share)


    Hello, and happy November!

    I’ve been fairly quiet the past few years. You know, covid. Stuff has been happening, but I’ve done a pretty bad job of communicating. There have been some really fantastic highlights, and I’m going to try to post every fortnight for the rest of the year to catch up!

    Fuza is still going, we’ve just been pivoting things a bit. These past 3 years have seen not just covid, but also the birth of my two boys, and the undertaking of a PhD. I’ve been writing a lot, just not blog posts, having conducted research for and written 10+ academic papers and a huge thesis. Anyway, here is a heads up on some of the topics you can expect over the coming months. And a random photo of me in the Brecons.

    A PhD whilst having two kids… that’s crazy!

    The next set will be about the research I’ve done during my PhD, one of the big things I’ve been up to over the past 3 years. We’ll “inspect and adapt” as ever, but I’m hoping to share something industry-useful about each peer-reviewed paper I’ve done. They are all about AI and/or system architecture diagrams.

    Doing COVID public policy research and a postdoc… whilst doing a PhD

    I’ve also been helping to provide scientific advice to support the covid response. Here’s a paper I wrote with some awesome researchers, mostly from the London School of Hygiene and Tropical Medicine, about a study in summer 2021. We argue that the views of the public should be gathered early when making modelling decisions – something that reflects agile product principles, and that we hope will help modellers to make better assumptions in their models of “plausible worlds”, in which different interventions may be conducted. I’m planning on sharing some informal experiences of doing research to inform public policy (for the UK COVID response), and of doing a postdoc on digital twins… Spoiler alert: I’m no longer doing either of those things.

    Building back better (sic)

    And, to complete the catch-up, I’ll share what I’ve been up to these past six-plus months as CTO of the incredibly awesome PorthouseDean structural engineers (we make sure your fancy new extension or loft conversion doesn’t fall down). This company is really cool, not least because we have awesome plans for growth and disruptive innovation in the world of structural engineering calculations, so watch this space! I’m continuing to do research, and also continuing to support lovely clients and charities with their data systems. The overarching theme across all this work is making analytical decisions accessible.

    I’ll keep you “posted” on how this goes – and as ever please reach out if you’d like to chat.