The clear message from experts debating the regulation of Big Tech is that the sector needs reform but that the plans to regulate social media more tightly must be realistic and workable. Last night (28 February) I attended and took part in the Media Society debate on the subject hosted by Simons Muirhead, the leading West End media firm, which featured panellists representing not only the tech industry, but the broadcasters and expert commentators.

The BBC’s Technology correspondent, Rory Cellan-Jones, made the point that the debate has moved on a long way from a decade ago. Then there was “arrogance” on the tech side (freedom of the internet was paramount) and “ignorance” on the part of the politicians. Now politicians were much more alert to the issues but seemed awed by the sheer size and economic dominance of the tech giants, whose primary aims were profit and growth. Rory argued that the most effective way to regulate them might be to break them up and reduce their monopoly power.

Martin Moore (King’s College London), an academic expert on big tech, supported such a move. As regards harm on social media, Martin underlined the sheer immensity of the task: the flood of material being uploaded every minute and the huge number of monitors required to regulate it effectively. He noted that one form of effective regulation of social media already existed in China, achieved through algorithmic computer programmes (so “unacceptable” material was sent but never received). But this method was blunt and opaque, and would raise serious issues of freedom of expression if used by Western democracies. Martin said one way forward might be to develop the idea of social media companies owing a specific “duty of care” to their users, akin to that owed by sports stadium owners to spectators. He also pointed to the lack of political will in the UK to resource institutions like the Electoral Commission properly and to equip them with adequate powers to deal with “fake news” and the lack of transparency in political advertising.

Vinous Ali, Head of Policy for Tech UK, which represents more than 900 tech companies, underlined that the industry was not opposed to tighter regulation. But regulation must be practicable and flexible, suitable for all tech companies large and small, and not developed as a panic response to negative media coverage aimed at solving perceived problems at one Big Tech giant, like Facebook. The industry needed, for example, some clear definitions of what “harm” was. Her comments echoed those made by the Internet Association in its letter to government on 1 March.

There was a general consensus that the proposals of the DCMS Select Committee to force social media companies to adopt and follow a new code of practice or face large fines were not well thought through. The government’s forthcoming White Paper (drafts of which were now circulating around Whitehall, according to panel chair Phil Harding) was awaited with great interest but considerable scepticism. As Martin Moore commented, there was no “silver bullet” to solve all these complex policy and legal issues.

Asked to comment at the end as Ofcom’s former Senior Standards Manager, I made the point that it was not possible simply to map broadcasting regulation across to the internet. But there were important lessons to be learnt from Ofcom’s experience – not least that any definition of “harm” had to be flexible and capable of being developed over time. I suggested the issues were so complex that it would probably be best to start with a system of more robust self-regulation, with the clear threat of statutory intervention if the industry proved incapable of, or unwilling to take, effective action.