Category: Uncategorized

  • AI Regulation: Housing Update March 2025

    Let’s go back to the start of AI and regulation. ChatGPT launched in November 2022, triggering a wave of hype and interest in AI. Let’s step through some of the important UK government reports on AI that could have an impact on social housing providers.


    The UK government’s AI Regulation White Paper, published in March 2023 and updated on August 3, 2023 (the “White Paper”), and its written response of February 6, 2024 were perhaps the first position papers. The key thing here? There is unlikely to be a central regulator for AI. The government recommended the following principles for all regulators:

    • Principle 1: Regulators should ensure that AI systems function in a robust, secure, and safe way throughout the AI life cycle, and that risks are continually identified, assessed and managed. 
    • Principle 2: Regulators should ensure that AI systems are appropriately transparent and explainable.
    • Principle 3: Regulators should ensure that AI systems are fair (i.e., they do not undermine the legal rights of individuals or organizations, discriminate unfairly against individuals, or create unfair market outcomes).
    • Principle 4: Regulators should ensure there are governance measures in place to allow for effective oversight of the supply and use of AI systems, with clear lines of accountability across the AI life cycle.
    • Principle 5: Regulators should ensure that users, impacted third parties and actors in the AI life cycle are able to contest an AI decision or outcome that is harmful or creates a material risk of harm, and access suitable redress.

    Next we have the AI Opportunities Action Plan (January 2025). This sets out the UK government’s approach to AI and was much promoted in the media. In terms of Housing, there are a few key recommendations: that the government “Commit to funding regulators to scale up their AI capabilities, some of which need urgent addressing”, and, potentially the most exciting suggestion for Housing, that it “Require all regulators to publish annually how they have enabled innovation and growth driven by AI in their sector”. The report also contains an implementation framework, suggesting an approach to AI of Scan > Pilot > Scale. I’d say Housing as a sector is somewhere between Scan and Pilot at present, varying wildly by individual organisation. An interesting part of this will be the Regulator of Social Housing’s future position on AI for the sector, which has not yet been published (many regulators have yet to publish one, so this is not unusual).


    Another newsworthy item: in February 2025 the UK Government declined to sign the Statement on Inclusive and Sustainable Artificial Intelligence at the Paris AI Summit. Signatories to the Declaration pledged to make AI “open, inclusive, transparent, ethical, safe, secure, and trustworthy”. I don’t think this directly impacts Social Housing too much, but it does set some of the trajectory for wider commercial developments.


    Quietly, an Artificial Intelligence Playbook for the UK Government was published in February 2025. It contains solid advice on using AI in public services, and in my view its principles also hold well for Social Housing providers. The paper is short and worth a read. The key considerations it outlines:

    1. You know what AI is and what its limitations are.
    2. You use AI lawfully, ethically, and responsibly.
    3. You know how to use AI securely.
    4. You have meaningful human control at the right stage.
    5. You understand how to manage the AI life cycle.
    6. You use the right tool for the job.
    7. You are open and collaborative.
    8. You work with commercial colleagues from the start.
    9. You have the skills and expertise needed to implement and use AI.
    10. You use these principles alongside your organisation’s policies and have the right assurance in place.

    The first provisions of the EU AI Act became applicable in February 2025. This is a consideration for Housing, in that its thinking may be adopted in the UK. It is quite large, to the extent that it has its own navigation tool for all the chapters, annexes, and recitals. An important (and in my view sensible) feature is that it defines different risk levels, each requiring a different degree of scrutiny.

    I’ll just call out one level: “Unacceptable risk, therefore prohibited, examples include the use of real-time remote biometric identification in public spaces or social scoring systems, as well as the use of subliminal influencing techniques which exploit vulnerabilities of specific groups.” Housing providers need to act ethically and responsibly, and the use of data systems, including AI, is no exception.


    The Artificial Intelligence (Regulation) Bill (2025), a private member’s bill, passed its first reading in the House of Lords two weeks ago. It seems unlikely to pass, but again it may signal the future trajectory of AI regulation. According to Kennedys, the Bill proposes:

    1. Creation of an AI Authority: the Bill proposes the establishment of a dedicated regulatory body tasked with overseeing AI compliance and coordinating with sector-specific regulators.
    2. Regulatory Principles: The Bill enshrines the Five AI principles, derived from the UK government’s March 2023 white paper, “A Pro-Innovation Approach to AI Regulation.”
    3. Public Engagement and AI Ethics: The Bill highlights the need for public consultation regarding AI risks and transparency in third-party data usage, including requirements for obtaining informed consent when using AI training datasets.

    Final thoughts: Don’t forget, we still have GDPR. Compliance with GDPR across all data systems remains a requirement, and the risk of GDPR breaches has definitely increased as a result of generative AI. Anecdotally, we’ve seen things like customer information being emailed out to personal accounts because ChatGPT is blocked on the network. Firming up on GDPR and on cyber security is always important. Keeping abreast of the latest regulatory and ethical considerations is also a good idea at this time, whilst also working to better understand, and achieve benefits from, the tools we now have access to.

    Written by a human who is not a lawyer. This is not legal advice.

  • “Good” applications of AI

    Following an AI leadership session at Yorkshire Housing in July 2024, there was an engaging discussion about good uses of AI – “good” both in terms of ethics and in terms of appropriate (or impactful) use. I’ve put together a few examples here, covering a range of methods and application areas, to show the art of the possible and hopefully inspire those closest to the work to consider their own possible “good” uses of AI.

    AlphaFold

    AlphaFold tackles a scientific problem: predicting the 3D structure of a protein. Determining each of these experimentally could take an entire PhD, and at the time of writing over 200 million proteins have been modelled. This extends to structures involving DNA, and has massive applications for medicine and health. An additional “nice” thing is that the earlier code and all of the data have been made open source, so scientists can validate and improve the system. This is a highly specialised machine learning system.

    More info: https://alphafold.ebi.ac.uk/

    AI assistant for customer support agents

    Lots of companies provide this; in one business I work with, we’re currently using a startup called Ducky. These tools work like plugins (in Ducky’s case, in your browser): based on the support query, they surface relevant policy documents and summarise the email thread. They can also draft a response for the customer support agent to edit. It saves typing out boilerplate, and we’ve found it really helps support agents get through queries more easily (especially where an obscure policy is needed).

    More info: https://www.ducky.ai/

    AI alert for heart anomalies in hospitals

    This AI is hooked into ECGs monitoring the heartbeats of hospital patients, and it alerts clinicians when there is a high risk of death. In clinical trials it led to a 31% reduction in deaths, a bigger effect than would be expected from a new medicine, even though it is “just” a monitoring system.

    More info: https://www.newscientist.com/article/2428674-ai-that-determines-risk-of-death-helps-save-lives-in-hospital-trial/

    Breast cancer detection

    This one is about computer vision. The system picks out whether a mammogram is likely to be showing early-stage cancer. The particularly smart part here is the implementation: mammograms are studied as usual by two radiologists, then put through the system, which flags any it thinks have been missed to a third human radiologist, who then decides whether to recall the woman. This keeps the human central to the process whilst still gaining the benefits of early identification of breast cancer. No jobs are at risk, and the human still catches some cases where the AI would miss an identification.

    More info: https://www.imperial.ac.uk/news/249573/new-ai-tool-detects-13-more/

  • Structuralism for diagrams

    You might have heard of structuralism in linguistics? The general idea, according to Wikipedia, is that “elements of human culture must be understood by way of their relationship to a broader system”. This idea resonates with me as an agile practitioner and systems thinker, so I decided to explore diagrams through this lens. It gets a bit deep, so I’ve made a video about it.

    Exec summary

    We can examine diagrams as if they are a language, and use this to uncover patterns and good practices.

  • Remember remember (to take time to share)

    Hello, and happy November!

    I’ve been fairly quiet the past few years. You know, covid. Stuff has been happening, but I’ve done a pretty bad job of communicating. There have been some really fantastic highlights, and I’m going to try to post every fortnight for the rest of the year to catch up!

    Fuza is still going; we’ve just been pivoting things a bit. These past three years have seen not just covid, but also the birth of my two boys and the undertaking of a PhD. I’ve been writing a lot, just not blog posts, having conducted research for and written 10+ academic papers and a huge thesis. Anyway, here is a heads-up on some of the topics you can expect over the coming months. And a random photo of me in the Brecons.

    A PhD whilst having two kids… that’s crazy!

    The next set of posts will be about the research I’ve done during my PhD, one of the big things I’ve been up to over the past three years. We’ll “inspect and adapt” as ever, but I’m hoping to share something industry-useful about each peer-reviewed paper I’ve written. They are all about AI and/or system architecture diagrams.

    Doing COVID public policy research and a postdoc… whilst doing a PhD

    I’ve also been helping with scientific advice to support the covid response. Here’s a paper I wrote with some awesome researchers, mostly from the London School of Hygiene and Tropical Medicine, about a study in summer 2021. We argue that the views of the public should be gathered early when making modelling decisions – something that reflects agile product principles, and that we hope will help modellers make better assumptions in their models of “plausible worlds”, in which different interventions may be conducted. I’m planning on sharing some informal experiences of doing research to inform public policy (for the UK COVID response), and of doing a postdoc on digital twins… Spoiler alert: I’m no longer doing either of those things.

    Building back better (sic)

    And, to complete the catch-up, I’ll share what I’ve been up to these past six-plus months as CTO of the incredibly awesome PorthouseDean structural engineers (we make sure your fancy new extension or loft conversion doesn’t fall down). This company is really cool, not least because we have awesome plans for growth and disruptive innovation in the world of structural engineering calculations, so watch this space! I’m continuing to do research, and also continuing to support lovely clients and charities with their data systems. The overarching theme of all this work is making analytical decisions accessible.

    I’ll keep you “posted” on how this goes – and as ever please reach out if you’d like to chat.

  • Report released: COVID modelling with Test, Trace and Isolate

    Abstract: The efforts to contain SARS-CoV-2 and reduce the impact of COVID-19 have been supported by Test, Trace and Isolate (TTI) systems in many settings, including the United Kingdom. The mathematical models underlying policy decisions about TTI make assumptions about behaviour in the context of a rapidly unfolding and changeable emergency. This study investigates the reported behaviours of UK citizens in July 2021, assesses them against how a set of TTI processes are conceptualised and represented in models and then interprets the findings with modellers who have been contributing evidence to TTI policy. We report on testing practices, including the uses of and trust in different types of testing, and the challenges of testing and isolating faced by different demographic groups. The study demonstrates the potential of input from members of the public to benefit the modelling process, from guiding the choice of research questions, influencing choice of model structure, informing parameter ranges and validating or challenging assumptions, to highlighting where model assumptions are reasonable or where their poor reflection of practice might lead to uninformative results. We conclude that deeper engagement with members of the public should be integrated at regular stages of public health intervention modelling.

    Link: https://www.medrxiv.org/content/10.1101/2022.01.31.22269871v1

  • 5 ways to be better at Agile

    Dear reader,

    Rather than our usual short-form blog post, I have published a longer LinkedIn Pulse article containing my 5 top tips for being better at Agile. I hope you find it useful and/or interesting. Please do give me feedback!

    Thanks,

    Guy


  • This is not a blog post

    This week, we’re shaking things up. Instead of the usual weekly blog, we have prepared a special something, which we will release tomorrow.

    Stay tuned for more details!


  • Research in Agile

    This post is inspired by a question from one of our readers, Lukasz. I’m going to outline how I find and examine research on organisations, agile/lean, and culture. Hopefully this will inspire you to dig more into what stuff is true and what is just crap.

    Finding the genuine facts among the huge volume of opinion is hard. It’s hard in politics, it’s hard in management, and it’s hard in social science. As a mathematician, I come from a world where things are either true or not, and I continue to find exploring ambiguous and opinion-rife research challenging.

    Finding an interesting topic

    First you need to know what you want to know. Inspiration for what to research can be found in case studies, papers, blogs, books, conversations, your own experience etc. I personally find my ways of thinking most easily challenged by experience, books, videos, and conferences (probably because these are accessible!).

    Finding the research

    Once you’ve got something you want to know, and the vocabulary to describe it, I’d recommend googling with specific terms. For example, if you are interested in the impact of management on team members, try a search like “role hierarchy team impact”. Stay away from buzzwords like “management” or “agile”.

    Google Scholar is good for finding paper titles, but publisher paywalls often mean you have to pay for the papers themselves. Once you know the title, searching again for those specific papers or authors will often turn up a free version on the author’s academic page, or at least some related content.

    Assessing research quality

    1. Be cynical. Assume everyone is lying and check their “facts”.
    2. Beware sweeping statements. It is hard to have good social science that is very general.
    3. Use your noggin. E.g. is the sample size big enough? Is there a control group?

    Beware research fashion

    Just because something is popular to talk about (or highly cited) doesn’t make it good. A good example is the Myers-Briggs Type Indicator. Yes, it is popular, and arguably helpful to some, but that doesn’t make it true or “the way to classify people”. Similarly, some leadership styles are more heavily researched than others. The weight of research can be tempting to give in to, but keep sifting, especially when the research is about models that help you understand a topic (rather than an absolute truth).

    Finally

    Once you’ve found something you think looks solid, a good test is to try it yourself! Run an experiment relevant to your situation, and see if you get results in line with the theory. Then tell other people what you’ve learned. (Yes, I’m ignoring confirmation bias etc.)

    If you have other techniques, or questions or suggested improvements to my ways of researching, please do share them in the comments!


  • Pretty maths

    Bear with this post: it goes through some equations at the beginning, but it is worth it. We’ll be doing some of the calculations to get this picture:

    [Image: The Mandelbrot set]

    This is the set of numbers c such that the sequence z_{n+1} = z_n^2 + c, starting from z_0 = 0, stays bounded. The z are complex numbers in general, but we’ll stick to ordinary real numbers for now. It is much easier to understand if we look at some examples:

    Let’s say c = -1.

    We start with z_0 = 0, then z_1 = 0^2 + (-1) = -1, z_2 = (-1)^2 + (-1) = 0, z_3 = -1, and so on.

    The sequence just repeats 0, -1, 0, -1, …, so the numbers are bounded.


    Let’s now try c = 0.5.

    We start with z_0 = 0, then z_1 = 0.5, z_2 = 0.5^2 + 0.5 = 0.75, z_3 = 0.75^2 + 0.5 = 1.0625, z_4 ≈ 1.63, z_5 ≈ 3.15, z_6 ≈ 10.44, …

    We can see that these numbers are getting bigger and bigger, and the sequence is not bounded.

    One more: c = -1.9.

    This time we get z_0 = 0, z_1 = -1.9, z_2 = 1.71, z_3 ≈ 1.02, z_4 ≈ -0.85, z_5 ≈ -1.18, z_6 ≈ -0.52, …

    It bounces around a lot, never getting very big or very small, so it is bounded. It is kinda fun to sit with a calculator and try this.
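
    If you’d rather not do the arithmetic by hand, here is a minimal Python sketch (mine, not from the original post) that runs the same check for each value of c. It uses the standard fact that once |z| grows past 2 the sequence is guaranteed to escape to infinity.

        # Minimal sketch: iterate z_{n+1} = z_n^2 + c from z_0 = 0 and report
        # whether the sequence stays bounded within a fixed number of steps.
        def stays_bounded(c, max_iter=100):
            z = 0.0
            for _ in range(max_iter):
                z = z * z + c
                if abs(z) > 2:  # once |z| > 2 the sequence must blow up
                    return False
            return True

        for c in (-1, 0.5, -1.9):
            print(c, "bounded" if stays_bounded(c) else "not bounded")
        # Expected output: -1 bounded, 0.5 not bounded, -1.9 bounded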

    Mathematicians call this kind of system “chaos”, as it is very sensitive to the starting conditions. Sometimes this is called the butterfly effect. Note that chaotic is not the same as random: in chaotic systems if you know everything about the initial conditions you know what will happen, whereas in random systems even if you knew everything about the initial conditions you wouldn’t know what was going to happen.

    Benoit Mandelbrot was one of the first mathematicians to have access to a computer, and hopefully you can now see why he needed one to work these out. He repeated this for lots of values of c. The pretty picture we started with is really a plot of the set of c (called the Mandelbrot set), where the colours indicate what happens to the sequence (e.g. how quickly it escapes, if it does).
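
    As a rough illustration of what that involves, here is an unoptimised Python sketch (my own, not Mandelbrot’s and not from the original post) that repeats the iteration over a grid of complex values of c and colours each point by how quickly the sequence escapes:

        # Rough sketch: escape-time rendering of the Mandelbrot set.
        # Each grid point is a complex value of c; the colour records how many
        # iterations it takes for |z| to exceed 2. Points that never escape
        # are treated as members of the set.
        import numpy as np
        import matplotlib.pyplot as plt

        def escape_time(c, max_iter=80):
            z = 0j
            for n in range(max_iter):
                z = z * z + c
                if abs(z) > 2:
                    return n           # escaped after n iterations
            return max_iter            # never escaped: in (or very near) the set

        xs = np.linspace(-2.0, 0.75, 600)    # real part of c
        ys = np.linspace(-1.25, 1.25, 500)   # imaginary part of c
        image = np.array([[escape_time(complex(x, y)) for x in xs] for y in ys])

        plt.imshow(image, extent=(xs[0], xs[-1], ys[0], ys[-1]), origin="lower")
        plt.xlabel("Re(c)")
        plt.ylabel("Im(c)")
        plt.title("Escape time for z_{n+1} = z_n^2 + c")
        plt.show()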

    [Image: The Mandelbrot set with axes]

    You can zoom into the colourised picture here to see just how complex it is. Lots of people (me included) think it is pretty cool. It is really worth taking a look to appreciate the complexity.

    Other than being pretty, why does this matter?

    Stepping back: This picture is made from the formula z_{n+1} = z_n^2 + c. This is so simple, and yet gives rise to infinite complexity. In the words of Jonathan Coulton,

    Infinite complexity can be defined by simple rules

    Benoit Mandelbrot went on to apply this to the behaviour of economic markets, among other things. Others have since applied it to fluid dynamics (video), medicine, engineering, and many other areas. Apparently there is even a Society for Chaos Theory in Psychology & Life Sciences!

    Further reading

    This article is good for more explanation of the maths.

    Apologies to any Pure mathematicians for the simplifications in this article.


  • Collecting data, the smart way

    Here is my short, simple step-by-step guide for smart collection of data.
    Step 1) Determine what matters, ideally in accordance with a Company or Product vision
    Step 2) Come up with as many different ways as you can of measuring the aspects that matter, or that impact what matters
    Step 3) Collect data! Ideally set up easily repeatable ways of doing this, automated wherever possible
    Step 4) Form hypotheses: how do you believe certain measures affect your vision? What do you expect the data to tell you?
    Step 5) Collect more data
    Step 6) Test your hypotheses
    Step 7) Collect even more data. Quite simply, the more data the better.

    [Photo: Bletchley Park, “the more data the better”]

    Let’s look at an example. Suppose a government manager wishes to improve the innovation of her employees.
    Step 1) Target: what matters here is “innovation” – which we define more precisely in…
    Step 2) Measurement: Some of the ways in which innovation can be measured are volume of ideas, number of staff submitting ideas, percentage of staff submitting ideas, value delivered, employee perception of innovation produced, manager perception, and customer perception (in this case the public would be the customer), etc.
    Step 3) Collection: This involves ensuring that things are centrally recorded and surveys are done to create a baseline.
    Step 4) Hypothesis: It is suggested that an innovation rewards ceremony would help to improve morale. Note that it is important that the hypothesis is formed after the first data collection, as we want to be able to dig deeper into anything interesting we find. This means we often need to collect more detailed data, targeted specifically at proving or disproving our hypothesis.
    Step 5) Collection: A more accurate, probably quantitative, measure of morale is added to the existing survey.
    Step 6) Action: An innovation rewards ceremony is run.
    Step 7) Collection: The survey is conducted again, and morale is measured as having improved. Success! Note that the other measures (e.g. the volume of ideas produced) are now also being consistently measured and can easily be tracked throughout future experiments. A minimal sketch of how the before-and-after comparison might be tested appears just after these steps.
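
    The sketch below (mine, not part of the original guide) is a hypothetical example of testing step 7’s comparison in Python; the survey numbers are invented purely for illustration, and the choice of test assumes the two surveys sampled respondents independently.

        # Hypothetical sketch: compare morale scores (1-10) from the baseline
        # survey and the follow-up survey after the innovation rewards ceremony.
        # The numbers below are invented purely for illustration.
        from scipy import stats

        before = [6, 5, 7, 6, 5, 6, 7, 5, 6, 6]   # baseline survey responses
        after = [7, 6, 8, 7, 6, 7, 8, 6, 7, 7]    # follow-up survey responses

        # Independent two-sample t-test (use stats.ttest_rel instead if the
        # same respondents answered both surveys and can be matched up).
        result = stats.ttest_ind(after, before)
        print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")

        # A small p-value suggests the shift is unlikely to be noise alone,
        # but remember: correlation does not imply causation.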
    After running through these steps we can ask ourselves the following questions.
    What do we now know?
    • Key measures, and how they are changing with time
    • Whether the key measures remain the same, or if other aspects should be considered.
    What can we not imply?
    •  “Correlation does not imply causation”: just because a trend becomes apparent this does not mean that one workplace modification is the main contributor to a measured difference. For example, if morale improves during the summer months this may have been due to nicer, warmer weather rather than any particular managerial decisions.
    • We cannot assume that any trends apply in similar cases elsewhere: our sample is too small and too specific. Luckily, a full research paper is not the goal here!
    As some of you may have noticed, this is very similar to the Six Sigma methodology of “Define, Measure, Analyse, Improve, Control”. It also mirrors the “Plan, Do, Check, Act” process found in many management handbooks.
    The detail of the steps you yourself follow is not particularly important here; all I am really suggesting is to:
    • Ensure you are working on what really matters.
    • Add wider data collection before directing all your attention to one particular area. This way you will have a richer understanding of the problems and opportunities.