"CAPTCHA" is a term for a type of challenge-response test developed around the turn of the 21st century to verify that a user was human. The earliest versions involved typing words shown in distorted images, which early computer "bots" (little more than software agents running scripts to perform actions in place of a user) could not identify reliably. Later iterations often involved matching images against a text description. Because this was a computer-administered test to determine whether a user was human, CAPTCHA and similar technologies are sometimes called a "reverse Turing test".
Obviously, as AI technology has grown more sophisticated and scripting knowledge has become more accessible and widespread, verifying human users against weak AI or even basic software agents (which can be little more than a script that posts "FIRST!" on every new XP upload by another user) has become more complex - but it is still regarded as highly important, especially in polities with strict controls on AIs of all forms, from ALI to AGI. A number of verification solutions remain in use.
The first and easiest: many individual sites on the mesh, and services a bot might connect to, require a simple "Yes/No" verification that the subject is a transhuman (some phrase this as "sapient" or "not an AI", etc.). Legal restrictions and basic code limitations prevent most ALIs from directly lying when asked to verify - which suffices for many basic services. Of course, this does nothing against ALIs specifically coded to deceive or to break local laws, nor against sub-AI agents which have no intelligence to shackle, just simple outputs and actions. Mesh sites which require more specific verification often require users to link an Ego ID or another account, such as one on a major social network. This satisfies most that the user is a transhuman agent, and they can function normally. Building fake IDs and accounts for AIs and software bots is possible, but time consuming - and sites generally run AIs or scripts of their own to catch and remove spammy or fake responses, such as on a mesh forum, while most social networks have functions to flag and delete spam or bot accounts. Such accounts also tend to accumulate low rep scores naturally.
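As a rough out-of-setting illustration, such a two-tier gate might be sketched as follows. Every name here is invented for the example: `ask` stands in for whatever channel delivers the yes/no challenge, and `registry_lookup` for an optional Ego ID or social-network check.

```python
def verify_visitor(ask, linked_ego_id=None, registry_lookup=None):
    """Two-tier transhuman check (illustrative only).

    Tier 1: direct yes/no self-attestation, which most law-abiding ALIs
    cannot falsify. Tier 2: if a registry lookup is supplied, the visitor
    must also present a linked Ego ID that the registry recognizes.
    """
    # Tier 1: the simple "are you a transhuman?" challenge
    answer = ask("Are you a transhuman (not an AI)? yes/no")
    if answer.strip().lower() != "yes":
        return False
    # Tier 2: stricter sites also demand a verifiable linked identity
    if registry_lookup is not None:
        return linked_ego_id is not None and registry_lookup(linked_ego_id)
    return True
```

A deceptive ALI simply answers "yes" and passes tier 1, which is why stricter sites bolt on the linked-identity tier.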
This more relaxed style is not enough for some groups and even governments, however. Very strict security or transparency habs may require a rather intrusive verification process which double-checks the Mesh ID of a device against a registry, identifying the type of device and who is supposed to own it - and may reject devices with anomalous IDs or obviously faked registry entries. Others check metadata so that certain activities can only be performed from a device registered as a mesh insert or home server, to ensure an Ego is behind it - which has a tendency to catch script kiddies foolish enough to run spambots or exploits right out of their own brains.
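A strict-hab device check of this kind might look something like the sketch below. The registry layout, field names, and the "ego-only" activity flag are all assumptions made up for illustration, not anything canonical.

```python
# Device types that plausibly have an Ego directly behind them.
EGO_ONLY_DEVICE_TYPES = {"mesh_insert", "home_server"}

def admit_device(mesh_id, registry, activity_is_ego_only=False):
    """Illustrative strict-hab admission check against a Mesh ID registry."""
    entry = registry.get(mesh_id)
    if entry is None:
        return False          # anomalous / unregistered Mesh ID
    if entry.get("owner") is None:
        return False          # obviously faked registry entry: no owner
    if activity_is_ego_only and entry.get("device_type") not in EGO_ONLY_DEVICE_TYPES:
        return False          # activity restricted to inserts / home servers
    return True
```

For example, a public terminal would pass for ordinary browsing but be rejected for an activity flagged as ego-only.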
The most sophisticated and reliable of these systems, however, resemble the old CAPTCHA model and its derivative standards. The simplest (but sometimes most time-consuming) involves XP snippets. Software bots and ALIs may be unable to properly "play" XP, lacking the complete neural modeling needed to experience every element of a full biomorph-based XP sensorium. This catches many outright, and they are also generally unable to correctly answer follow-up questions about the snippet. X-CAPTCHA systems like this maintain very large databases of possible snippets to play, but are still theoretically vulnerable to sweatshop AGIs or infomorphs who might build a big enough database of correct responses to bypass them - meaning this option is less secure for sites or apps without access to large servers, and usually involves contracting an external service. The best of these is an Extropian firm run by an AGI with an extensive pool of freelancers and forks to process and edit the XP library and distribute it randomly around the system. The Argonauts also maintain a very thoroughly tested open source version - which unfortunately draws on open source media libraries, meaning a handful of hackers have gotten lucky in the past.
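Structurally, an X-CAPTCHA round is just classic challenge-response: draw a snippet at random from a large pool, let the visitor "play" it, then grade the follow-up answers. The sketch below is a toy version with an invented pool format; the whole point of the real thing is that the pool is enormous, which is what defeats precompiled answer databases.

```python
import random

def issue_challenge(snippet_pool, rng=random):
    """Pick a random XP snippet and return its ID plus follow-up questions."""
    snippet_id = rng.choice(sorted(snippet_pool))
    return snippet_id, snippet_pool[snippet_id]["questions"]

def check_response(snippet_pool, snippet_id, answers):
    """Grade follow-up answers against the stored expected responses."""
    expected = snippet_pool[snippet_id]["answers"]
    return len(answers) == len(expected) and all(
        given.strip().lower() == want
        for given, want in zip(answers, expected)
    )
```

A replay attacker who has seen `snippet_id` before can answer from a database, which is why the Extropian service constantly edits and reshuffles its library.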
The Argonauts' other solution offers weaker guarantees than the XP one, but is still very good. Based on advanced plagiarism-checking algorithms developed for academic purposes, these systems ask short but subjective questions and check the answers for originality and authenticity - usually matching them against the responses expected from an Ego - and advanced versions can even keep a history and check similarities against a specific person, which can track alt accounts or stolen accounts. While it has some flaws, this system is generally sophisticated enough for those who wish to ensure that users aren't doing anything against terms of service involving bots or AIs. Others go old school, dropping the "reverse" and simply administering a variant of the Turing test. This is common in tight-knit communities with many active users. While the test itself is not perfect, most transhumans can ask the right questions to tell whether a user is a basic chat-bot script or an ALI, especially in a focused or specialized community with specific interests to probe. Some argue this tends toward exclusivity - as many in a community may refuse to call a user "authentic" - but those with genuine passion can usually find it in other people, something an ALI cannot easily emulate.
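The originality half of such a system can be approximated with a very crude plagiarism-style check: tokenize a new answer and compare it, via Jaccard similarity, against answers already on file. This is a minimal sketch under invented thresholds; actual stylometry as described above would be vastly more sophisticated.

```python
def _tokens(text):
    """Crude tokenizer: lowercase whitespace-split word set."""
    return set(text.lower().split())

def is_original(answer, seen_answers, threshold=0.8):
    """Return False if the answer is a near-duplicate of any prior answer.

    Jaccard similarity = |intersection| / |union| of token sets; a score
    at or above the (invented) threshold marks a canned or copied reply.
    """
    new = _tokens(answer)
    for old in seen_answers:
        old_tokens = _tokens(old)
        union = new | old_tokens
        if union and len(new & old_tokens) / len(union) >= threshold:
            return False
    return True
```

A bot replaying harvested responses trips the duplicate check; a genuinely subjective answer from an Ego almost never collides word-for-word with someone else's.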
It should be noted that Muses, with their very sophisticated personality parameters, can often bypass these systems - but a Muse is almost always paired with a transhuman and rarely acts outside its user's self-interest. Most locales and services therefore do not especially mind if a Muse accesses their sites or apps, as it does so on behalf of a person - and Muses are still typically constrained by basic AI limits. Forks, too, can complicate the issue. Alpha forks are simply people, though they may not be distinct legal entities. Beta forks are more limited in capacity and focus, but their inclusion of memories and personality traits for context may let them pass many forms of verification on the right device - though they can often be matched to their Alpha. Deltas, being little more than a pruned AI template or a skillsoft collection, do not pose well as humans, and their amnesiac nature means many subjective tests can catch them. Either way, using forks for fraudulent activity is usually restricted by local laws. Other locations and mesh sites don't care in the slightest whether users are "human" or not - they rely on active and passive moderation to remove harmful software bots and ALI users, along with transhumans who violate the terms of their services.