Workshop: “Child Safety on the Web” · Issue #505 · w3c/strategy · GitHub

Workshop: “Child Safety on the Web” #505


Open
tjwhalen opened this issue Apr 23, 2025 · 3 comments
@tjwhalen

Describe the web problem you think needs solving

Governments around the world increasingly look for methods to protect the safety of children as they participate online, and recently have been passing mandates for sites to identify or verify the age of visitors in order to alter functionality or exclude them from content. Those mandates often introduce risks to privacy and free expression, including surveillance of everyone's reading and writing online, breaches of sensitive data, prevention of children's access to important information, or the blocking or takedown of websites altogether. It would be helpful to find ways to support parents and children as they navigate access to appropriate Web content, without increasing privacy risk or impeding free expression.

Describe some use cases and requirements in detail

  • how do parental controls or user safeguards work on the Web and how can we improve them?
    • can we find alternative approaches that don't, for example, rely on the collection of government IDs?
  • focus on architectural considerations in the workshop (rather than policy debate)

Workshop is an opportunity to:

  • propose and review potential solutions and how they could be adopted or deployed on the Web
  • map out the space of solutions (in different layers of the stack)
  • better understand some of the proposals being developed (e.g., from major tech firms)
  • brainstorm technical designs and identify areas of interest for further standardization work

Possible topics:

  • standards for sites to self-label content, through appropriate metadata
  • ways for users to communicate to sites that they are trying to avoid certain types of content ("safe mode")
  • how we can adapt protocols and formats to help endpoints provide better 'hooks' (e.g., the "safe" preference in RFC 8674)
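As a concrete illustration of the last topic, RFC 8674 defines a "safe" HTTP preference that a user agent can send to ask a site for content suitable for, e.g., children. A minimal sketch of a client opting in (the URL is a placeholder; this only constructs the request):

```python
import urllib.request

# Build a request carrying the RFC 8674 "safe" preference.
# A supporting server may acknowledge it with a
# "Preference-Applied: safe" response header and adapt its content.
req = urllib.request.Request(
    "https://example.com/",
    headers={"Prefer": "safe"},
)
```

The appeal of this mechanism is that it signals a preference without revealing the user's age or identity, which is exactly the kind of privacy-preserving hook the workshop could explore.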

Potential outcomes:

  • a shared understanding of the range of different approaches available
  • an understanding of the technical challenges in each approach
  • an understanding of the governance and policy challenges in each approach

There may need to be some preparatory work done to survey existing options to lay the groundwork for a productive workshop.

Above material excerpted from an email thread; many thanks and all credit to the contributors, cc'd here for further discussion: @npdoty, @mnot, @martinthomson, @sandandsnow, @bvandersloot-mozilla

@iadawn
iadawn commented Apr 24, 2025

Just to add that this overlaps considerably with safety on the web for people with disabilities, particularly where they have carers or care workers who assist them in online dealings.

@wareid
wareid commented Apr 25, 2025

This is a topic that the Publishing industry has a big interest in, particularly on the retailer/reading-platform side. Over the years we've explored a variety of ways to organize and filter content for specific audiences. It might be worth looking at some of Publishing's metadata standards for how Publishers/Authors can identify their content.

On the retail side, many have tooling or checks in place to flag adult content if it is not already identified through metadata as being for adults. There is some challenge in establishing and managing these checks and rules because laws differ around the world on what ages content can be shown to, what content is considered inappropriate, and how to differentiate content. A classic example is "Lolita" by Vladimir Nabokov in comparison to "50 Shades of Grey" by E.L. James and what audiences those books can be shown to.

@simoneonofri

An analysis of options from the European Parliament.

Projects
Status: Investigation