The NGI consultation will ultimately help determine shared opinions and consolidated views on what the next generation internet should look like, and which research and innovation paths Europeans should follow to help achieve that goal.
This section of the public consultation platform contains analyses of what is being discussed; synthesis views of the outcomes of those discussions will also be published here, so that interested readers can quickly grasp the strategic elements of the process.
All reports, graphs, and concepts presented here are assembled by the NGI Consultation Team from contributions made by all Users. The Consultation Team's aim is to make the material easier to read, for the benefit of the whole NGI Community and citizens in general. Please always refer to the original posted content for more information.
I'm a researcher in Semantic Web technology at Ghent University – imec, Belgium.
We want to address the ICT-24 call with a Web technology stack, emphasizing:
Next generation ICT technologies are seen as having a major impact on our society. The convergence of technologies such as 5G, SDN, AI, and NFV is highly important.
The power balances between providers and users created by IoT and Big Data need to be framed by European values, as discussed at the workshop on Personal Data Spaces and Privacy.
Another widespread concern, raised in Hub4NGI D2.1, is the abuse of Internet technologies causing threats or limitations to democracy and liberty.
At the workshop on Personal Data Spaces and Privacy, there was a discussion about how people need to better understand the effects of giving away personal data.
It is important to have an ethical framework in place for artificially intelligent and autonomous machines.
One of the topics raised in the Hub4NGI D2.1 deliverable that is worth discussing further is the lack of control over data.
Hub4NGI D2.1 also raises a significant issue regarding responsible AI: whether, and how, an AI algorithm can be held accountable for its actions.