Inverting the Web

We use search engines because the Web does not support accessing documents by anything other than URL. This puts a huge amount of control in the hands of the search engine company and those who control the DNS hierarchy.

Given that search engine companies can barely keep up with the constant barrage of attacks, commonly known as "SEO", intended to lower the quality of their results, a distributed inverted index seems like it would be impossible to build.

Which is why I think we need to look to another web/network, which is people's social connections.

We just need a way for everyone to maintain and share a personal index that contains both information from automated scraping (i.e. keywords, SIPs, etc) and any notes or other metadata the person cares to attach to the page, domain, etc.

Of course, an index large enough to serve most queries you might be interested in would be pretty big, similar to the size of the document set.

The inverted index of the documents themselves is the biggest part. And since it's easy to verify, perhaps that part can be handled in a more centralized, possibly federated, fashion?
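
A minimal sketch of the inverted-index idea under discussion, in Python; the document texts and IDs are invented for illustration:

```python
from collections import defaultdict

# Toy corpus: hypothetical document IDs and contents.
docs = {
    "doc1": "distributed search needs a shared index",
    "doc2": "a personal index of notes and keywords",
    "doc3": "search engines rank pages by keywords",
}

# Inverted index: term -> set of document IDs containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term].add(doc_id)

def search(query):
    # AND query: intersect the posting sets for each query term.
    postings = [index.get(t, set()) for t in query.lower().split()]
    return set.intersection(*postings) if postings else set()

assert search("index keywords") == {"doc2"}
assert search("search") == {"doc1", "doc3"}
```

Sharing and merging such term-to-postings maps between peers is the easy-to-verify part; ranking and spam-resistance are the hard parts.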

@freakazoid What methods *other* than URL are you suggesting? Because it is simply a Uniform Resource Locator (or Identifier, as URI).

Not all online content is social / personal. I'm not understanding your suggestion well enough to criticise it, but it seems to have some ... capacious holes.

My read is that search engines are a necessity born of no intrinsic indexing-and-forwarding capability which would render them unnecessary. THAT still has further issues (mostly around trust)...

@freakazoid ... and reputation.

But a mechanism in which:

1. Websites could self-index.
2. Indexes could be shared, aggregated, and forwarded.
3. Search could be distributed.
4. Auditing against false/misleading indexing was supported.
5. Original authorship / first-publication was known.

... might disrupt things a tad.

Somewhat more:

NB: the reputation bits might build off social / netgraph models.

But yes, I've been thinking on this.

@dredmorbius @freakazoid
Isn't yandex a federated search engine? Maybe @drwho has input?

@enkiv2 @dredmorbius @drwho You're probably thinking of YaCy.

@enkiv2 I know SEARX is:

Also YaCy as sean mentioned.

There's also something that is/was used for Firefox keyword search, I think OpenSearch, a standard used by multiple sites, pioneered by Amazon.

Being dropped by Firefox BTW.

That provides a query API only, not a distributed index, though.

@freakazoid @drwho

@dredmorbius This is not a fully fleshed out idea yet, but the "L" was the important bit. People generally don't care about the location of the content. They care about the content of the content, and other stuff about the content like the author, etc.

Just think about how people generally navigate the web these days. They don't type a URL into their addressbar or click a bookmark. They type a search query into their address bar, which will generally bring up Google results.

@freakazoid Re: navigation.

1. Google are trying hard to kill off the URL.

2. There may be user-pattern based reasons to do just that.

3. URLs and DNS map ... poorly ... to meatspace notions of locality and identity. In large part due to the actions of websites, search engines, browser devs, SEO, and domain registrars.

4. A namespace with at _least_ a half-million entities and little sensible structure ... is far beyond human scale.

5. It's mostly reputation.

@freakazoid Sorry, what "L"?

I'm not seeing reference to this and am confused.

@dredmorbius Sorry, I mean the "L" in "URL". It's a uniform resource *locator*.

Google is trying to build the thing I'm talking about, only it will be designed to give them even more power than they already have by hiding URLs entirely, making it so that there's no chance at all to navigate the web successfully without them.

@freakazoid OK, yes.

And, old hat to you, but the idea was to "locate on the Internet, by server and path":

... in a system literally designed by nuclear particle physicists.

Alternatively, L = I, "identifier".

Location == Identity.

Part of that remains valid. Part of it ... may not.

I've been kicking around the idea of a (local) document-oriented "filesystem" in which specifiers are effectively metadata descriptors or content-based keys.

@dredmorbius Yeah, I've thought about similar approaches. Directories aren't required to be listable. Unordered bags of KV pairs don't map super well to hierarchical paths, but it's not like that matters very much. For most people a filesystem interface wouldn't matter anyway; they want a browser-ish application.

@freakazoid It ... depends.

There are times you want a _very specific_ resource.

It's not just _content_ that matters, but ownership, provenance, who can / did change / modify it, etc., etc.

There are times when "what colour is the sky?" can be answered by any of thousands of references.

The fact that _approximate, content-described results_ are _sometimes_ or even _often_ appropriate doesn't mean _always_.

@dredmorbius Indeed, but I'm not talking about getting rid of URLs, and for such things search engines end up just acting as a URL directory, since you will look until you see the URL you want.

@freakazoid Shifting ground (and jumping back up this stack -- we've sorted the URL/URI bit):

What you suggest that's interesting to me is the notion of _self-description_ or _self-identity_ as an inherent document characteristic.

(Where a "document" is any fixed bag'o'bits: text, audio, image, video, data, code, binary, etc.)

Not metadata (name, path, URI).

*Maybe* a hash, though that's fragile.

What is _constant_ across formats?

@freakazoid So, for example:

I find a scanned-in book at the Internet Archive, I re-type the document myself (probably with typos) to create a Markdown source, and then generate PDF, ePub, and HTML formats.

What's the constant across these?

How could I, preferably programmatically, identify these as being the same, or at least, highly-related, documents?

MD5 / SHA-512 checksums will identify _files_, but not _relations between them_.

Can those relations be internalised intrinsically?

@freakazoid Or do you always have to maintain some external correspondence index which tells you that SOURCE.PDF was the basis for RETYPED.MD which then generated RETYPED.MD.ePub and RETYPED.MD.html, etc.

Something that will work across printed, re-typed, error/noise, whitespace variants. Maybe translations or worse.

Word vectors? A Makefile audit? Merkle trees, somehow?

@dredmorbius We have real world solutions for these problems in the form of notaries, court clerks, etc. I.e. (registered) witnesses. Trusted third parties, but they don't have to be a single party.

@dredmorbius In the RDF world I guess one doesn't sign the individual triple but the entire graph.

And it might make more sense to call these 4-tuples, because it's really "this person says that this object is related in this way to this other object".

@freakazoid So for 4-tuple:

1. Verifier.
2. Object1.
3. Object2.
4. Object1-Object2 relation.

"Signed" means that the whole statement is then cryptographically signed, making it an authenticatable statement?
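
A rough sketch of such an authenticatable 4-tuple, assuming a canonical serialization. HMAC-SHA256 stands in here for a real public-key signature, and all names and keys are hypothetical:

```python
import hashlib
import hmac
import json

def canonicalize(quad):
    # Deterministic serialization: sorted keys, no whitespace variance,
    # so the same statement always hashes the same way.
    return json.dumps(quad, sort_keys=True, separators=(",", ":")).encode()

def sign_quad(quad, key):
    # HMAC-SHA256 as a stand-in for an asymmetric signature scheme.
    return hmac.new(key, canonicalize(quad), hashlib.sha256).hexdigest()

def verify_quad(quad, key, signature):
    return hmac.compare_digest(sign_quad(quad, key), signature)

quad = {
    "verifier": "dredmorbius",
    "object1": "doc:source.pdf",
    "object2": "doc:retyped.md",
    "relation": "was-basis-for",
}
key = b"verifier-secret-key"
sig = sign_quad(quad, key)
assert verify_quad(quad, key, sig)
```

Any change to the verifier, the objects, or the relation invalidates the signature, which is what makes the whole statement authenticatable.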

@dredmorbius Exactly.

@freakazoid And, so:

Back to search and Web:

- The actual URL and path matter to the browser.

- They may matter to me. Some RoboSpam site ripping off my blog posts _might_ leave the content unchanged, but they're still scamming web traffic, ads revenue, or reputation, based on false pretences. I want to read my content from my blog, not SpamSite, even if text and hashes match.

@freakazoid The URL and domain connote to _trust_ and a set of relationships that's not front-of-mind to the user, but _still matters_.

Content search alone fails to provide this. And some proxy for "who is providing this" -- who is the _authority_ represented as creator, editor, publisher, curator, etc. -- is what we're looking for. DNS and host-part of URL ... somewhat answer this.

(Also TLS certs, etc.)

@freakazoid Right: authorities, certifiers, validators, auditors.

Some may verify _contents_, many only verify _process_. Some do detailed forensics.

The end result is a distributed web of trust over a fact or artefact being what it appears or claims to be. Which isn't always correct, but increases costs (and risks) of deception.

That will probably be at least a part of the system(s) I'm considering. There's some underlying need for either external authority or distributed consensus.

@dredmorbius I'd say just publish all the claims about the data, and let each person, node, organization, etc, decide which witnesses/publishers to trust. With tools to make that as easy as possible, of course.

@freakazoid The problem with "just decide who to trust" is that it becomes combinatorially expensive quickly.

Back to the URL/URI issue, and looking at DNS again, there's the notion of a DNS search list -- a set of domains (or subdomains) searched preferentially for an unqualified hostname.

That's a useful though somewhat inflexible approach to "how do I assign nicknames to resources frequently used?"

We don't address people formally by full names + patronyms + SSN. We say "Hey, Sean".
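
The DNS-search-list idea above can be sketched as a toy resolver; the domains, hostnames, and addresses are all hypothetical:

```python
# Preference-ordered search domains, tried in turn for unqualified names.
SEARCH_LIST = ["home.example.net", "work.example.com"]

# A stand-in for actual DNS lookups.
KNOWN_HOSTS = {
    "printer.home.example.net": "192.0.2.10",
    "wiki.work.example.com": "192.0.2.20",
}

def resolve(name):
    if "." in name:                      # already fully qualified
        return KNOWN_HOSTS.get(name)
    for domain in SEARCH_LIST:           # try each search domain in order
        addr = KNOWN_HOSTS.get(f"{name}.{domain}")
        if addr:
            return addr
    return None

assert resolve("printer") == "192.0.2.10"   # nickname -> first match wins
```

The inflexibility is visible here too: the nickname-to-resource mapping is fixed by the search-list order, not by any per-resource preference.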

@freakazoid And ... there are levels of locality _and_ generality in trust.

If waifu tells me "lamp is broken, switch is burnt out", we have a close relationship, and I'm inclined to believe her.

But when I get to the lamp store the tech says "no, that's wired in series, there's a bad bulb". I give more credence to the tech's knowledge _even if they've not inspected the lamp_ than waifu.

Trust is complex and contextual.

@freakazoid Got it.

So in RDF: Subject - (Predicate) -> Object

"X relates to Y as Z".

As a 4-tuple:

"A _says_ that X relates to Y as Z".

Hash & sign, etc., etc.

@freakazoid I think, by the way, that this in part answers my question: is self-description possible.

No, it's not. _Some_ level of metadata (even if provided within the work itself) is necessary.

@dredmorbius FWIW word and phrase presence/frequency is self-description, in that it is verifiable without consulting a human. It's also useful for search, though it's generally not what humans care about directly even though it's what they search on; what they care about is the actual idea or thing they think documents having those words or phrases might be about.

@freakazoid Right.

I need to check what the state of the art is, but based on tuples or n-grams of even short word sets (2-3, maybe 4), you can create an extensive signature of a text by sampling within it. You can transform those to be constant against various modulations (e.g., ASCII7 vs. Unicode, whitespace, punctuation, ligatures, even common spelling variants/errors).

And then check an offered text against a known signature on a sampling of tuples through the doc.

This undoubtedly exists.
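
A minimal version of that n-gram signature idea, assuming a simple ASCII-oriented normalization pass and Jaccard similarity over word trigrams:

```python
import re
import unicodedata

def normalize(text):
    # Fold Unicode to a compatibility form, lowercase, and strip
    # punctuation so trivial variants produce identical word streams.
    text = unicodedata.normalize("NFKD", text).lower()
    return re.sub(r"[^a-z0-9 ]+", " ", text)

def ngrams(text, n=3):
    # Set of overlapping word n-grams from the normalized text.
    words = normalize(text).split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a, b, n=3):
    # Jaccard similarity: shared n-grams over total distinct n-grams.
    ga, gb = ngrams(a, n), ngrams(b, n)
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0

original = "The quick brown fox jumps over the lazy dog."
variant = "THE QUICK brown fox, jumps over the lazy dog!!"
assert similarity(original, variant) == 1.0
```

Checking an offered text against a known signature then just means comparing a sample of its n-grams against the stored set.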

@freakazoid And for anyone following this:

I'm not an expert, though I'm interested in the area.

I feel like I'm staggering drunk in the dark. Some of what I'm describing is Things I Have Known for Five Minutes Longer Than You (or a few days). Some longer.

This is ... remote from most work I've done, though I've been kicking around ideas for a few years, and know at least _some_ of what I'm talking about.

Informed input / corrections welcomed.

@dredmorbius @freakazoid I'll state publicly that I appreciate the thinking going on in this thread!

I don't have any additional input to give.

@freakazoid @enkiv2 @dredmorbius @drwho
Yeah, I think I am...

@dredmorbius Regarding the ripping off of content, URLs only help with that to the extent that people pay attention to them, which they don't, even when typing in passwords and other secret information like credit card numbers.

@dredmorbius So the question I want to answer is how do we enable that kind of navigation, or something similarly easy to understand, without giving a whole bunch of power to a single entity? How do we leverage people's existing trust networks, or existing reputable (generally topic-specific) databases to provide results with at least as good of quality as Google's?

@freakazoid @dredmorbius
The idea we had with xanadu is that, because links are part of an overlay instead of embedded, people would send each other packs of links between different documents, and you might subscribe to a themed feed of links the way you subscribe to an RSS feed or follow an account. It was supposed to be p2p but could be federated -- but doesn't work if overlay links don't work (i.e., if content is mutable or addresses are)

@freakazoid @dredmorbius on the darknet, people are generally advised to use a TOFU model. Use the first link you find from a reputable source, bookmark it, and use a different one only if it is cryptographically signed by whatever entity controls the resource you are using.

@freakazoid A directory-path-based specification is saying "find this precise linked-list chain of directory specifications, with the implied properties of ownership, access permissions, modification history, provenance, etc., etc."

People looking for docs may allow slack. Software looking for libraries, somewhat less so.

And even humans looking for specific documentary authority may want a specific result.

@freakazoid The key for me is that _search is identity_, or at least _an identifier_, if _a search query_ returns _precisely one match_.

(Other options being "null" or "list".)

@dredmorbius I'm not sure I understand. It's possible for searches to return singleton results by accident. It seems like what you want is to distinguish between searchable metadata fields that uniquely identify resources and those that don't.

@freakazoid Right, that IS a problem, and a BIG one.

Possibly THE problem.

Q: Can documents be reasonably self-describing or self-identifying?

@dredmorbius I assume you mean *securely* self-describing?

Most distributed storage systems that try to defend against malicious nodes use exactly two types of keys, each self-certifying: content hash for immutable values and public key hash for mutable ones.

Beyond that you're into the realm of the subjective. My thinking here was to have signed triples ala RDF and use some kind of reputation system, i.e. web of trust, to decide which to trust.
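
The content-hash half of that scheme is tiny to sketch: the key is the SHA-256 of the bytes, so any node can verify a fetched value without trusting its source:

```python
import hashlib

def content_key(data: bytes) -> str:
    # Immutable value: the key IS the hash of the content. Self-certifying,
    # because possession of matching bytes proves the key is correct.
    return hashlib.sha256(data).hexdigest()

def verify(key: str, data: bytes) -> bool:
    # Any retrieving node can re-hash and check, no trusted party needed.
    return content_key(data) == key

blob = b"Inverting the Web"
key = content_key(blob)
assert verify(key, blob)
assert not verify(key, b"tampered content")
```

The mutable case works analogously: the key is a hash of a public key, and values must carry a signature verifiable against it.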

@dredmorbius @freakazoid
Ad revenue is basically a way to use the web's (accidental) dynamism as a monetization strategy. If monetization were based on permission to access, you'd save on hosting costs if you *only* gave permission & whoever happened to be around did the hosting (like serving password-protected items off bittorrent and selling the passwords).

@dredmorbius They use techniques like this to detect plagiarism. You can compute something like a Bloom filter for a document and then use Hamming distance to compare. That can work well as long as one is not intentionally trying to defeat it.

Of course, that assumes raw text. Once you get into complex markup, the markup can change the meaning of the document without changing what a text extractor will see. And then there's higher-bandwidth media like images, audio, and video.
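
The compute-a-filter-then-compare-Hamming-distance idea described above is essentially simhash; a minimal sketch (fingerprints are content-dependent, so no exact distances are claimed):

```python
import hashlib
import re

def simhash(text, bits=64):
    # Weighted bitwise vote over token hashes: each token's hash votes
    # +1/-1 per bit position; near-duplicate texts produce fingerprints
    # that differ in few bit positions.
    v = [0] * bits
    for token in re.findall(r"\w+", text.lower()):
        h = int.from_bytes(hashlib.sha256(token.encode()).digest()[:8], "big")
        for i in range(bits):
            v[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if v[i] > 0)

def hamming(a, b):
    # Number of differing bit positions between two fingerprints.
    return bin(a ^ b).count("1")

doc1 = "the quick brown fox jumps over the lazy dog"
doc2 = "the quick brown fox jumped over the lazy dog"
doc3 = "completely unrelated text about distributed search engines"
d_near = hamming(simhash(doc1), simhash(doc2))
d_far = hamming(simhash(doc1), simhash(doc3))
# Near-duplicates typically land much closer in Hamming distance
# than unrelated texts.
```

As noted, this works only against accidental variation; an adversary deliberately padding or rewriting text can push the distance up.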

@enkiv2 @dredmorbius Of course, now instead of pirating big files people will just pirate the passwords ;-)

An ISP startup I worked for back in '96 (InterNex, later acquired by Concentric which renamed itself to XO Communications using one of Internex's domains for customers) tried to make something like this. It was essentially DRM for arbitrary content that used a .exe wrapper that contacted a license server. I don't think they ever managed to even bring it to market.

@freakazoid @enkiv2 @dredmorbius
Right, I'm imagining a world without piracy. (It turns out that if it's easier to pay, the first world will generally just pay, and piracy becomes limited to folks who wouldn't pay anyway.) What I'm describing is xanadu 'transcopyright' though -- but transcopyright in xusp, xsp, oxu, & xuc is based on one time pads for subdivision reasons so it doesn't save you any bytes.

@enkiv2 @dredmorbius Perhaps a better approach would be to separate funding and access entirely, the way it was for thousands of years?

@enkiv2 @freakazoid @dredmorbius
It saves you some because links and formatting aren't encrypted and also because (since documents are static) nobody's re-fetching. And also since new versions transclude from the old they wouldn't need to fetch twice for an update (but also would only pay for updated characters...)

@enkiv2 @dredmorbius If it's easy to pay people will pay, but then there's also a strong encouragement to put stuff that would otherwise have been free behind paywalls, like we see in app stores. I don't think "no piracy" is the goal we should be looking for. It's maximum value for humanity from creativity.

@enkiv2 Do you know / have you worked with Andrew Pam / Xanadu Australia?

I discovered his work w/ Xanadu in the closing months of G+.

@zardoz Right. TOFU's also long been used in PGP/GPG, and is arguably more widespread than the Web of Trust.

A widely practiced mis-assertion of a key is likely to result in a public disavowal ... eventually.

For someone with a particularly high threat function / risk calculus, that's not attractive. And for most casuals, it's yet another idea that can lead to bad practices / poor decisions which might later be regretted.


@freakazoid Yes, this.

Another Brilliant Idea I had, to promptly discover far more able minds had arrived at it long before.


Finance it on tax-supported UBI, awards, grants, and bonuses, with supplemental income from performance and unit sales where appropriate.


@enkiv2 @dredmorbius Or to put it another way my goal is not to make sure that people pay to consume content but to make it so that people can make awesome stuff. A fixed payment per person or per use is about the crudest way I can think of to accomplish that. If anything it dramatically limits the utility of creativity, because even though it's nearly costless for additional people to benefit from it, unless they can or will pay the fixed price, they get nothing.

@enkiv2 @dredmorbius Likewise, there's a barrier to paying *more*. Especially since payment happens up front, before the payer has any idea what utility they will derive from the content. Far better to pay after the fact on a sliding scale. Sure, some will exploit that, and I think our aversion for that is what makes us accept such a shitty solution to begin with. But I think creators would get far more with such a model, especially since it helps eliminate middlemen

@enkiv2 @dredmorbius We know making it easy to pay reduces piracy, but we have never tried making it easy to pay after the fact, and especially not without middlemen. The current easy payment systems (Netflix, Spotify, etc) have huge inefficiencies even ignoring the "one price fits all" problem. It also leaves niche interests under- or fully un-served.

@freakazoid @enkiv2 @dredmorbius elementary OS is exploring good things this way in their AppCenter!

They heavily encourage, but do not require, payment & they provide a button for people to pay whenever they want.

@freakazoid @enkiv2 @dredmorbius interesting thread. If resources had a 'Suggested price' and consumption means 'Intention to pay' then afterwards payment could be below price w. e.g. max. 50% off (disappointed), on par or above price (cool stuff). Average payment then indicates 'Quality of resource': "N people paid X price". Consistently underpaying affects Reputation, risks losing access to resources.

@dredmorbius @enkiv2
Yeah. When I was working on xanadu he was maintaining the repos & shell access. I never worked closely with him, but I'm friendly enough with him & his wife.

@freakazoid @enkiv2 @dredmorbius
Well, the XU transcopyright model isn't globally fixed & there was the assumption that only a relatively small amount of content would actually be paywalled, but you're right that when an effective paywall exists there's incentive to put more behind it. The point was to set up distribution in such a way that less user-friendly DRM measures & stuff like individual takedowns couldn't be justified as easily.

@enkiv2 @dredmorbius I also think a Xanadu-like system would break down quickly without people with guns to enforce it. It's too much complexity for too little gain. Effort to police violations would almost certainly exceed the amount of value for the vast majority of works. Just like it does when people steal small creators' videos on YouTube. So you'd have a system that at best would only benefit large content publishers. No thanks.

@freakazoid @enkiv2 @dredmorbius
Yeah, transcopyright relies on the existing (government-enforced) copyright & licensing mechanisms. It's a hack on top of that to streamline shit, just like the GPL. It's, in my view, the least shitty one beyond abolishing copyright entirely.

@alcinnz @freakazoid @enkiv2 @dredmorbius
The minimum price, set as a percentage of suggested price, ensures creators at least earn something. The percentage could start low and dynamically increase (or decrease further) based on the value assigned by the consumers through their payment.

@humanetech @alcinnz @enkiv2 @dredmorbius I think the way to make sure creators earn something is to have something like a UBI, or otherwise make it so that one doesn't have to earn anything to live a dignified, healthy, happy, productive life. I think the only minimum price that makes sense is zero, because a) that's the cost of an additional copy, and b) there are a huge number of people who will benefit from a work who can't pay.

@humanetech @alcinnz @enkiv2 @dredmorbius Remember, the value to humanity of a creative work comes from its consumption. The value to any given individual is the difference between the value to them of consuming the work and the value of what they have to pay for it. The reason we enable creators to capture some of the value they produce is to incentivize them to create more. But we want them to create works with the most value to others.

@humanetech @alcinnz @enkiv2 @dredmorbius We also want people to create derivative works, and we want works that encourage derivation, because that multiplies their value. Excessive financial incentives derived from limiting access tend to reduce the amount of derivation by others, and it can cause excessive derivation by the creator who owns the original work in an effort to extract maximum value with minimum effort.

@humanetech @alcinnz @enkiv2 @dredmorbius So I think the optimal scenario is to create a culture of paying for creative works not based on the value one individually derives from them, but the value one feels humanity derives from them. And of paying at whatever point in time they can, not just right before or right after consuming it. It can be like tithing to the church, where the church is a global decentralized Patreon.

@freakazoid @humanetech @alcinnz @enkiv2 @dredmorbius

Or like a tax.

Which could also go towards other commonly-beneficial endeavors like research... and schooling... and ensuring the general welfare.

Imagine a government that supported all these things.

@woozle @humanetech @alcinnz @enkiv2 @dredmorbius Unfortunately "forced charity" tends to make people ungenerous, with the result that public goods tend to get underfunded. It also crowds out voluntary charity due to what I have been calling the "I gave at the office" effect.

There is also the issue that in a democracy only popular things get funded unless there is a very strong culture of experimentation and openness.

@freakazoid @humanetech @alcinnz @enkiv2 @dredmorbius

Generosity isn't an issue when the contribution amount is scaled to each individual's surplus income.

...and I suspect that what the *majority* want would definitely include artistic endeavors, as long as basic needs were also being met.

From what I can tell, voluntary charity is all but useless, so no harm if it is supplanted by a more robust system.

@woozle @humanetech @alcinnz @enkiv2 @dredmorbius I don't mean generosity in the form of how much people contribute, but in what they vote to fund and how much, and what they choose to fund outside of their taxes. I don't think having the government decide what creative works get funding is a great idea. Or even ignoring the question of government, any system by which things only get funded if the majority decide it should.

@freakazoid @humanetech @alcinnz @enkiv2 @dredmorbius

Well, we disagree there. In making collective policy decisions, the only alternative to government (in the broadest sense of the word) that I'm aware of is markets, and we know where that leads.

Mind you, government itself needs to be restructured from the ground up before it could be trusted with anything important.

@freakazoid @humanetech @alcinnz @enkiv2 @dredmorbius

A substantial part of our creative output is made with the express purpose of influencing others.

And, like this thread, not paid for by the reader.

If I may paraphrase you:
The value to any given individual then is the difference between the value to them of getting others to consume the work and the value of what they have to pay for it.

@Jens @humanetech @dredmorbius @enkiv2 @alcinnz That's a really good point. I hadn't really been thinking about the value to the creator themselves of having others use their work. It especially gives an interesting perspective on Hollywood and the media since they have a lot of influence on our culture and politics directly through the works they produce.

@dredmorbius @enkiv2 @freakazoid YaCy isn't federated, but Searx is, yeah. YaCy is p2p.

@dredmorbius @enkiv2 @freakazoid Also, the initial criticism of the URL system doesn't entirely hold: the DNS is annoying, but isn't needed for accessing content on the WWW. You can navigate directly to public IP addresses and it works just as well, which allows you to skip the DNS. (You can even get HTTPS certs for IP addresses.)

Still centralized, which is bad, but centralized in a way that you can't really get around in internetworked communications.

@kick @enkiv2 @dredmorbius Not true; there are several decentralized routing systems out there. UIP, 6/4, Yggdrasil, Cjdns, I2P, and Tor hidden services to name just a few. Once you're no longer using names that are human-memorizable you can move to addresses that are public key hashes and thus self-certifying.

A system designed for content retrieval doesn't really need a way to refer to location at all. IPFS, for example, only needs content-based keys and signature-based keys.

@kick HTTP isn't fully DNS-independent. For virtualhosts on the same IP, the webserver distinguishes between content based on the host portion of the HTTP request.

If you request by IP, you'll get only the default / primary host on that IP address.

That's not _necessarily_ operating through DNS, but HTTP remains hostname-aware.

@enkiv2 @freakazoid
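
The virtual-host point can be made concrete by building a raw HTTP/1.1 request by hand: the server selects the site from the Host header, not from the connection's destination address. The hostnames below are hypothetical:

```python
def http_get(host: str, path: str = "/") -> bytes:
    # Minimal HTTP/1.1 GET request. The Host header is mandatory in 1.1
    # and is what the webserver uses to pick among virtual hosts.
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode()

# Two requests that could travel over the very same IP connection,
# yet address two different virtual hosts:
req_a = http_get("blog.example.org")
req_b = http_get("shop.example.org")
assert req_a != req_b
```

Requesting by bare IP just means the Host header carries the IP, so the server falls back to its default site.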

@dredmorbius @kick @enkiv2 IP is also worse in many ways than using DNS. If you have to change where you host the content, you can generally at least update your DNS to point at the new IP. But if you use IP and your ISP kicks you off or whatever, you're screwed; all your URLs are now invalid. Dat, IPFS, FreeNet, Tor hidden sites, etc, don't have this issue. I suppose it's still technically a URL in some of these cases, but that's not my point.

@freakazoid Question: is there any inherent reason for a URL to be based on DNS hostnames (or IP addresses)?

Or could an alternate resolution protocol be specified?

If not, what changes would be required?

(I need to read the HTTP spec.)

@kick @enkiv2

@dredmorbius @kick @enkiv2 HTTP URLs don't have any way to specify the lookup mechanism. RFC3986 says the part after the // and optional authentication info followed by @ is a "registered name" or an address. It doesn't say the name has to be resolved via DNS but does say it is up to the local system to decide how to resolve it. So if you just wanted self-certifying names or whatever you can use otherwise unused TLDs the way Tor does with .onion.
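
This is easy to see with a standard URL parser: the authority is handled as an opaque string, and resolution is left to the local system. The .onion name below is invented:

```python
from urllib.parse import urlsplit

# RFC 3986 treats the authority as an opaque "registered name"; nothing
# in the URL itself says it must be resolved via DNS. Tor's .onion names
# ride on exactly this property.
u = urlsplit("http://examplehost123456.onion/some/page")
assert u.scheme == "http"
assert u.hostname == "examplehost123456.onion"
assert u.path == "/some/page"
```

So an alternate resolution mechanism only needs a name form the parser accepts and a local resolver that recognizes it.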

@freakazoid Hrm....


There are alternate URLs, e.g., irc://host/channel

I'm wondering if a standard for an:

http://<address-proto><delim><address> might be specifiable.

Onion achieves this through the .onion TLD. But using a reserved character ('@' comes to mind) might allow for an addressing protocol _within_ the HTTP URL itself, to be used....

@kick @enkiv2

@freakazoid Looking at RFCs 2068, 2616, and 7230

@kick @enkiv2

@freakazoid Answering my own question: no, there's not:

"As far as HTTP is concerned, Uniform Resource Identifiers are simply formatted strings which identify--via name, location, or any other characteristic--a resource."

@kick @enkiv2

@dredmorbius @kick @enkiv2 @ is already reserved for the optional username[:password] portion before the hostname.

@dredmorbius @kick @enkiv2 Right, but the very next section talks about HTTP URLs, which are much more narrowly defined.

It's not the protocol that's the issue, it's the http and https URI schemes.

@freakazoid "host" seems defined in ways that allows for a multitude of sins.

@kick @enkiv2

@freakazoid @enkiv2 @dredmorbius I said _really_. None of those are human-readable (unlike IP). Non-human-readable systems miss the point of the WWW, web of trust stuff is awful and doesn't scale. Human readability in decentralized addressing is a solved problem (more or less) for addressing systems, but there's nothing good implementing the solution yet, so little point.

@kick I'm with you in advocating for human-readable systems. IPv4 is only very barely human-readable, almost entirely by techies. IPv6 simply isn't, nor are most other options.

Arguably DNS is reaching a non-human-readable status through TLD propagation.

Borrowing from some ideas I've been kicking around of search-as-identity (with ... possible additional elements to avoid spoof attacks), and the fact that HTTP's URL is *NOT* bound to DNS, there may be ways around this.

@enkiv2 @freakazoid

@kick I'll disagree with you that WoT doesn't scale, again, at least in part.

We rely on a mostly-localised WoT all the time in meatspace. Infotech networks' spatial-insensitivity makes this ... hard to replicate, but I'm not prepared to say it's _entirely_ impossible.

Addressing based on underlying identifiers, tied to more than just content (I'm pretty sure that _isn't_ ultimately sufficient), we might end up with _something_ useful.

@enkiv2 @freakazoid

@kick Nodes of authority / trust, perhaps -- not centralised, but not fully distributed either. More hub-and-spoke than full-mesh, but a quite _extensive_ H&S system.

@enkiv2 @freakazoid

@dredmorbius @enkiv2 @freakazoid I don't think it's that big of a problem (famous last words)? If you're playing with virtual hosts then a planned local network distribution set-up ala P9/Inferno could be set up quite easily to have initial (all?) connections go through a single box/host, couldn't you? I haven't read the HTTP spec since I was a child so I'm not sure if there's anything that'd prevent this.

@freakazoid @dredmorbius @enkiv2 You can buy individual IP(v6; kind of hard to get IPv4 these days) addresses; ISP is irrelevant. Still centralized, but human-readable and hard to take away.

@freakazoid @dredmorbius @enkiv2 Is ! still reserved (! may be a DNS thing actually, thinking about it further)?

@kick Sorry, not following.

"Virtual hosts" in the HTTP sense are simply HTML targets _on that webserver_ which are differentiated by the requested hostname (fully or partially qualified). Not in the virtual machine (Xen, VMWare, qemu, etc.) sense.

So local network distribution is irrelevant?

Not familiar with (Plan 9?) Inferno.

The question is how distributed hosts across the Internet can request HTTP resources via URLs, without DNS.

@enkiv2 @freakazoid

@dredmorbius @enkiv2 @freakazoid IPv4 is completely human-readable if treated like phone numbers (though alternatively another way would be to map the available range of numbers to words, and autotranslate on the human end; humans can remember three words pretty easily). Kind of pushing it for English-speaking populations (English active-memory limit is about 7 items), I'll admit, but should be fine for the larger branches of the world that speak languages that can store more in active memory (e.g. Cantonese at 10).
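
The number-to-words idea above can be sketched in a few lines: three 11-bit words are enough to cover an IPv4 address's 32 bits. This is a minimal sketch; the wordlist here is fabricated placeholders, where a real deployment would use a curated, memorable 2048-word list (as BIP-0039 mnemonics do).

```python
import ipaddress

# Placeholder wordlist invented for this sketch; a real scheme would
# use 2048 curated, distinct, memorable words.
WORDLIST = [f"word{i:04d}" for i in range(2048)]  # 2^11 entries

def ip_to_words(ip):
    """Encode an IPv4 address (32 bits) into three 11-bit words."""
    n = int(ipaddress.IPv4Address(ip))
    return [WORDLIST[(n >> shift) & 0x7FF] for shift in (22, 11, 0)]

def words_to_ip(words):
    """Invert ip_to_words: recombine three 11-bit indices into 32 bits."""
    n = 0
    for w in words:
        n = (n << 11) | WORDLIST.index(w)
    return str(ipaddress.IPv4Address(n))
```

The round trip is lossless because 3 × 11 = 33 bits comfortably holds the 32-bit address space.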

@dredmorbius @enkiv2 @freakazoid WoT doesn't scale for average users. Technical users it does. WoT doesn't work over the phone, for example, or on e-mail, because people are easily convinced that malicious actors are within their WoT in targeted attacks. This is going to get worse esp. with recent FastSpeech & Tacotron publications/code releases.

@dredmorbius @enkiv2 @freakazoid Ohh, I misinterpreted (bit tired; just got finished jogging in ankle-deep snow; thinking suboptimally). Assumed you meant over different boxes using same IP.

@dredmorbius @enkiv2 @freakazoid Hm, yeah, point ceded there for the most part.

@kick @enkiv2 @dredmorbius @freakazoid This body remembers when the definition of "geek" was someone who used a computer to exchange text chat messages with people. At least, that's what it meant at UCSC. Going back further, was it Augustine who was mightily impressed that Ambrose could read without moving his lips?

@kick As of RFC 2396, "!" was unreserved. That RFC is now obsolete. Not sure if its status has changed.

@enkiv2 @freakazoid

@kick To be clear, I'm trying to distinguish WoT-as-concept as opposed to WoT-as-implementation.

In the sense of people relying on a trust-based network in ordinary social and commerce interactions in real life, not in a PGP or other PKI sense, that's effectively simply _how we operate_.

Technically-mediated interactions introduce complications -- limited information, selective disclosure, distance, access-at-a-distance.

But the principles of meatspace trust can apply.

@enkiv2 @freakazoid

@kick That is: direct vs. indirect knowledge. Referrals. TOFU. Repeated encounters. Tokenised or transactional-proof validations.

Those are the _principles_.

The specific _mechanics_ of trust on a technical network are harder, but ... probably tractable. The hurdle for now seems to be arriving at data and hardware standards. We've gone through several iterations which Scale Very Poorly or Are Hard To Use.

We can do better at both.

@enkiv2 @freakazoid

@dredmorbius @enkiv2 @freakazoid Entirely unrelated because I just remembered this based on @kragen's activity in this thread:

Vaguely shocked that I'm interacting with both of you because I'm pretty sure you two are the people I've (at least kept in memory for long enough) read the words of online consistently for longest. (Since I was like, eight, maybe, on Kragen's part. Not entirely sure about you but less than I've checked for by a decent margin at least.)

@kick Clue seeks clue.

You're asking good questions and making good suggestions, even where wrong / confused (and I do plenty of both, that's not a criticism).

You're helping me (and I suspect Sean) think through areas I've long been bothered about concerning the Web / Internet. Which I appreciate.

(Kragen may have this all figured out, he's far certainly ahead of me on virtually all of this, and has been for decades.)

@enkiv2 @kragen @freakazoid

@kick @enkiv2 @dredmorbius @freakazoid oh, that's wonderful ♥

@kick And yes, I've been around under various guises for quite a while.

@enkiv2 @kragen @freakazoid

@dredmorbius @enkiv2 @freakazoid Do you have a proposed mechanical solution to get around the social problems that arise with WoT? e.g.:

@kick A roundabout response, though I think it gets somewhere close to an answer.

"Trust" itself is not _perfect knowledge_, but _an extension of belief beyond the limits of direct experience._ The etymology's interesting:

Trust is probabilistic.

Outside of direct experience, you're always trusting in _something_. And ultimately there's no direct experience -- even our sight, optic nerve, visual perception, sensation, memory, etc., are fallible.

@enkiv2 @freakazoid

@dredmorbius @kick @enkiv2 @freakazoid while I appreciate the vote of confidence, and I did spend a long time figuring out how to build a scalable distributed index, I am as at much of a loss as anyone when it comes to figuring out the social aspect of the problem (SEO spam, ranking, funding).

@kick Building off the notion that "reality is what, when you stop believing in it, refuses to go away", we validate trust in received assertions of reality through multiple measures.

Some by the same channel, some by independent ones.

Getting slightly more concrete:

Simulator sickness is a problem commercial and military pilots experience with flight simulators. The problem is the simulator lies, and visual and vestibular inputs disagree. Sims are good, not perfect.

@enkiv2 @freakazoid

@freakazoid @dredmorbius @kick @enkiv2 the amount of jockeying for cool parts of the namespace is smaller in IP, though. there's less speculation, squatting, etc. but maybe that's just because people don't use it this way right now

@dredmorbius @kick @enkiv2 @freakazoid building a non-distributed index has gotten a lot easier though. when I published the Nutch paper it was still not practical for a regular person to crawl most of the public textual web, from a cost perspective. (not sure if it's practical now, though, due to cloudflare)
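
For reference, the core data structure the thread keeps returning to, an inverted index, is tiny at its heart. This is a toy sketch (not Nutch's implementation): term normalization here is just lowercase whitespace splitting.

```python
from collections import defaultdict

def build_index(docs):
    """Map each term to the set of document IDs containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, *terms):
    """Conjunctive query: documents containing every given term."""
    sets = [index.get(t, set()) for t in terms]
    return set.intersection(*sets) if sets else set()
```

The hard parts (crawling at scale, ranking, spam resistance) live outside this core, which is part of why the verifiable index itself might be handled centrally or federated while ranking stays contested.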

@kick I don't know if you've ever dealt with a habitual liar, or someone whose mental processes are so disrupted that they can't recall, or recall incorrectly, or misrepresent past events (or present ones). It's tremendously disorienting.

Our own memories are glitchy enough that you start doubting yourself. Having a record (journal, diary, receipts, independent witnesses) helps hugely.

Getting to theories of truth, consistency and correspondence seem to work best.

@enkiv2 @freakazoid

@dredmorbius @enkiv2 @freakazoid Cheater!

But yeah, a decent answer.

I do kind of worry about how fallible most WoT implementations are^1, but there definitely might be a way to do it, I’ll cede.

^1 Given that I as a random finance dork managed to reimplement the recent FastSpeech papers in ten days and get results decent enough to fool my SO when using it over a phone call (modern carriers started compressing call audio poorly when they internally moved to VOIP and the quality is pretty poor as a result), my confidence in what has previously been seen as a relatively decent way to verify (audio) has lessened slightly.

@kick Is a given narrative or representation *internally* consistent, or at least mostly so? And does it correspond to observable external realities (or again, mostly so)?

Mechanisms of trust generally try to achieve consistency or correspondence, sometimes both. In information systems, we tend to use one-way hashes, because those support the computational needs, but the hashes themselves are used to create a consistency or correspondence.

@enkiv2 @freakazoid
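
The hash-for-correspondence point can be made concrete in a few lines. A minimal sketch, not any particular system's protocol: a published digest lets anyone check that a received copy corresponds to the original.

```python
import hashlib

def digest(data):
    """One-way hash: easy to recompute, infeasible to forge a collision."""
    return hashlib.sha256(data).hexdigest()

original = b"the document as published"
received = b"the document as published"
tampered = b"the d0cument as published"

# Correspondence check: does my copy match the published digest?
published = digest(original)
assert digest(received) == published
assert digest(tampered) != published
```

The hash itself proves nothing about truth; it only ties the copy you hold to the thing that was originally published, which is the consistency/correspondence move described above.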

@kick I have been warning close friends and family members (some elderly and prone to dismiss technological threats and concerns as "nonsense" or "nothing I would want to use" or "beyond my understanding" or "but why would someone do that", v. frustrating) about DeepFakes and FastSpeech technologies.

I know that at least one has had faked-voice scam phone calls, though they realised this eventually. I'm predicting based in part on this, BTW.

@enkiv2 @freakazoid

@dredmorbius @kick @enkiv2 @freakazoid right, you usually need the right Host: header. a useful debugging or preflight checkout trick is to add the IP/hostname pair to your /etc/hosts so you can connect to the server even if DNS gives you the wrong IP or no IP for it (perhaps because the name isn't a domain name). Also I have seen name-based virtual hosts used successfully with mDNS for zeroconf web servers
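
The trick works because name-based virtual hosting keys off the Host header alone: the TCP connection can go to a bare IP with no DNS lookup at all. A minimal sketch of the request bytes (the hostname and IP in the comment are placeholders):

```python
def build_request(vhost, path="/"):
    """Build an HTTP/1.1 request for a name-based virtual host.

    The server selects the site from the Host header; where the
    client connected from, or how it resolved the name, is irrelevant.
    """
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {vhost}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode()

# e.g.: socket.create_connection(("203.0.113.7", 80)).sendall(
#           build_request("example.org"))
```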

@zardoz @dredmorbius @kick @enkiv2 @freakazoid the best attack on the SEO problem I've seen so far is Wikipedia: Wikipedia's messy social processes are very good at not getting captured by SEOs and the like. Not perfect, but enormously better than Google SERPs

@kick So, in the "we have your dad hostage" situation, the scammer's failure was one of correspondence: dad was already dead.

But how you'd check this, *if you had the presence of mind to do so*, would be to attempt independent verification through other channels.

Call his number directly, or your mother's (assuming both are still alive and together), or current partner's. Ask to speak to him. Call the police, etc.

Falsehoods are common to any comms regime.

@enkiv2 @freakazoid

@zardoz @dredmorbius @kick @enkiv2 @freakazoid I guess the other alternatives along those lines are the Git model (fork at will, and choose whose fork you link to) and the Debian model (maintainers exist, and vote on governance, but NMUs are available to limit the worst failures of the maintainer model, despite the avconv/ffmpeg problem etc.)

@dredmorbius @kick @enkiv2 @freakazoid in infosec "trust" means "reliance" and isn't probabilistic. It's just a choice to give an entity the power to attack you. What's probabilistic and fallible is the possible benefits of that choice.

@kick If the channel (or medium) is a narrow one, and _not_ given to interrogation or ready validation, then you've got a harder problem.

You may need to call on experts. And we _have_ those for extant document classes -- people who validate books, or paintings, or recordings, or photos, or videos. They look for signs of both authenticity and deception.

See Captain Disillusion. Or art provenance.

Not perfect. But pretty good.

@enkiv2 @freakazoid

@kragen As with most words, there's a range of meanings. I'll admit to having pulled "extension of belief beyond the limits of experience" out of my hat, so it's not entirely standard. And that's "trust as a state of knowledge".

There's also the notion of "to put one's trust in (someone|something)", which can mean a binary rather than probabilistic commitment. We also have provisional or total trust.

Trust me, it's complicated.

@kick @enkiv2 @freakazoid

@kragen On the Git / fork model, there's a problem I've been trying to articulate for years and think I may finally have:

The threat of the low-cost / high-capability developer.

That is, even outside the proprietary world, it's possible to shape the direction of software (or protocol or data standards) development by being the most able / capable / low-cost developer.

That's been an issue in several notable projects, and seems more so now.

@zardoz @kick @enkiv2 @freakazoid

@kragen So whilst it's possible to fork, it can be hard to fork *and sustain a competitive level of development and support* especially against a particularly complicated alternative.

Say: browser rendering engines. Or init suite replacements. Or integrated desktops. Or office suites. Or tax or accounting software.

A vastly funded adversary *even if operating wholly within Free Software*, can code circles around other parties.

@zardoz @kick @enkiv2 @freakazoid

@kragen This goes back to the days of "worse is better" -- because "worse" is also (near-term) cheaper, and faster to develop, so it iterates and improves much faster than "better".

You may end up stuck in a local optimum as a result. But you'll at least get there quickly, while "better" is still trying to get their 0.01 out the door.

Otherwise: I tend to agree re: Wikipedia and Debian: social and organisational structures help tremendously.

@zardoz @kick @enkiv2 @freakazoid

@kick So back to "how would you prove..."

If you're operating in an edge case outside the ideals of the planned system, especially where the attacker prevents (or claims unavailable) reliable means of verification -- and controlling the flow of information is one of the oldest hacks in the book, see Sun Tzu "On the Use of Spies" -- then you're somewhat limited.

But you can try bypassing the suspect channel, or side-channel leaks through that, or testing for consistency.

@enkiv2 @freakazoid

@zardoz @kragen @dredmorbius @enkiv2 @freakazoid I typed a long reply to this (and the message above it) but decided to send someone an e-mail first to ask about something they're familiar with that's tangentially related to this; depending on what/if they reply I might respond with a few guesses.

@kick All of which would help you establish the truth of a claimed world-state.

Having to be constantly vigilant for such cases is _extremely_ tiring, based on my own experience.

We prefer operating in high-trust environments. Which itself is a likely adaptation -- if certain systems / experiences prove consistently low-trust, those with the option to do so will abandon them.

(Not all have that option.)

@enkiv2 @freakazoid

@kragen @dredmorbius @enkiv2 @freakazoid I think it would be? Given the people working at Cloudflare, it seems like they'd whitelist whatever you're crawling with if you asked the right person assuming it didn't become something everyone and their cat was requesting to do.

@dredmorbius @zardoz @kick @enkiv2 @freakazoid it sounds like you're saying that free software tends to be meritocratic and some people don't like that? or is it more that it's much easier to add complexity to a problem (e.g., HTML5) than to remove it?

@kragen I see a lot of this coming down to:

- What is the incremental value of additional information sources? At some point, net of validation costs, this falls below zero.

- Google's PageRank relied on inter-document and -domain relations. Author-based trust hasn't carried as much weight. I believe it needs to.

- Randomisation around ranking should help avoid systemic bias lock-ins.

- Penalties for fraud, with increasing severity and duration for repeats.

@kick @enkiv2 @freakazoid
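
The randomisation-around-ranking bullet could look like this minimal sketch (the epsilon value and the scoring inputs are invented for illustration, not part of any proposal above):

```python
import random

def rank_with_noise(scores, epsilon=0.05, rng=None):
    """Order results by score plus a small uniform perturbation.

    The jitter occasionally surfaces lower-scored documents, which
    resists rich-get-richer lock-in around the incumbent top results.
    """
    rng = rng or random.Random()
    return sorted(scores,
                  key=lambda d: scores[d] + rng.uniform(-epsilon, epsilon),
                  reverse=True)
```

With epsilon small relative to score gaps, established ordering mostly holds; near-ties get shuffled, which is where lock-in would otherwise calcify.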

@kragen @dredmorbius @kick @enkiv2 @freakazoid nah I think he means that an agency with a lot of funding (like for instance google) could just become the arbiter of all information by pouring labor into it.

@kragen - Some way of vetting new arrivals / entities, such that legitimate newcomers aren't entirely locked out of the system. Effectively letters of recommendation or reference.

@kick @enkiv2 @freakazoid

@dredmorbius @kick @enkiv2 @freakazoid one of the nice things about PageRank is that the Perron–Frobenius theorem guarantees a well-defined result precisely because it has no penalties; penalties can give rise to the Eigenmoses problem, as described in
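
For concreteness, a toy power-iteration PageRank: the damping term (1-d)/n is what keeps the underlying matrix strictly positive, so Perron-Frobenius guarantees a unique stationary distribution whatever the link structure. A sketch, not Google's implementation.

```python
def pagerank(links, d=0.85, iters=50):
    """Power-iteration PageRank over a {node: [outlinks]} graph.

    Every node gets a baseline (1-d)/n of rank each round (the
    "random surfer" teleport), so no penalty can drive anyone to zero.
    """
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1 - d) / n for u in nodes}
        for u in nodes:
            out = links[u] or nodes        # dangling node: spread evenly
            share = d * rank[u] / len(out)
            for v in out:
                new[v] += share
        rank = new
    return rank
```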

@dredmorbius @enkiv2 @freakazoid The "call directly" is a good technical solution, but I know someone personally who didn't think to do that when a _company_ called them, so I'm not sure how well that'd work assuming a _person_ (they were in perfect state of mind, just unaware that companies generally don't call you first and ask for PII).

Educating users is the most difficult social problem, especially educating them on things that they generally don't recognize as _aspects_ of the problem (like you pointed to when you mentioned the elderly calling things they don't understand "nonsense," for example).

As an example of technical users failing the basic "trust but verify," you can find a bunch of examples on HN of people saying things akin to "I use ProtonMail because they encrypt all of my e-mails!" which is easily disprovable (in the sense that they're intending, not in-transit encryption, which basically every modern provider has) just by sending a message to a non-ProtonMail box that doesn't have a key on keyservers and finding it completely readable.

@zardoz @kragen Yes, close to this.

It's the power of free, or at least low-cost.

Software development itself closely resembles network structures (and is a network of interactions between functions or processes). Water seeks the largest channel, electricity the lowest resistance, and buyers the lowest cost; software development favours the most capable development.

It's impossible to compete against a lower price:

- Features
- Momentum
- Mindshare
- Security
- Etc

@kick @enkiv2 @freakazoid

@kragen That's beyond my understanding.

@kick @enkiv2 @freakazoid

@dredmorbius @kick @enkiv2 @freakazoid I do agree that author-based trust is pretty important

@zardoz @dredmorbius @kick @enkiv2 @freakazoid yeah, they kind of already did. the question from my point of view is how to change the rules of the game to keep them from creating barriers to entry that allow them to dollar-auction their way into net-negative social value

@kick People are stupid, yes.

I knew someone, years ago, who spent a week mad at her boyfriend because she'd mis-dialed his number, got a woman on the other end, and jumped to the conclusion that he was cheating on her.

That's ... a difficult problem to engineer around.

But we might be able to avoid some larger-scale consequences. The Podesta Test comes to mind.

@enkiv2 @freakazoid

@kragen A key problem with that is that current Web tooling makes it all but impossible to assess or even assert authority.

Hell, Usenet had this better in the mid-1990s with PGP-clearsigned messages.

We're a quarter goddamned century on.

@kick @enkiv2 @freakazoid

@dredmorbius @kick @enkiv2 @freakazoid I've thought that it might be reasonable to bootstrap a friendnet by assigning newcomers (randomly or by payment) to "foster families" or "undergraduate faculties" to allow them to gain enough whuffie to become emancipated. ideally, gradually, rather than through an emancipation cliff analogous to legal majority or a B.S.

@kragen You'd likely have to undermine their business model.

On the positive side, this is a dynamic which can be used to play megacorps (and possibly other interests) off one another.

That notion goes back to IBM's Earthquake Memo, ~1998.

I'm not sure if you were at the LinuxWorld Expo where copies of that were being shown around, probably 1999, NYC.

Tim O'Reilly wrote on that in Open Sources.

@zardoz @kick @enkiv2 @freakazoid

@kragen Challenge on any such scheme is scaling quickly enough, relative to other systems.

Though if the founding cohort is sufficiently interesting, you'll have the reverse problem: too many people wanting in.

An inspiration I've long had for this is Lawrence Lessig's "signed by" convention at the ... Yale Wall, I think, described in "Code and Other Laws of Cyberspace".

That applied to anonymous messages, but for new users might also work.

@kick @enkiv2 @freakazoid

@kragen It's effectively a socialisation problem -- how do you introduce new members to a society?

But doing that *without* creating an inculcated old-boys/girls/nbs network, or any of the usual ethnic or socioeconomic cliques. Something that most systems have generally failed at.

Random assignments should help but aren't of themselves sufficient.

@kick @enkiv2 @freakazoid

@dredmorbius @kick @enkiv2 @freakazoid Trump supporters label NPR as "fake news"; Trump opponents label Fox as "fake news". Presumably one side will win and the other will be penalized for linking to fake news, with increasing severity and duration for repeats. There's no particular reason to expect that it will be the correct side. See also: the Crusades, blood libel, babies ripped out of incubators, Lysenkoism. PageRank is immune to that.

@dredmorbius @kragen @enkiv2 @freakazoid How much privacy are you willing to sacrifice with this?

Taking a single possibility (I listed a few) from a thing I wrote to a couple of posts up-thread but didn’t send because I want to hear someone’s opinion on a sub-problem of one of the guesses listed:

Seed with trusted users (i.e. people submitting sites to crawl), rank preferentially by age (time-limited; would eventually wear off), then rank on access-by-unique-users. Given that centralized link aggregators wouldn’t disappear, someone throws HN in, for example, the links on HN get added into the pool, whichever get clicked on most rise up, eventually get their own ranking, etc.

This works especially well if using what I sent the e-mail to inquire a little more about: cluster sorting rather than just barebacking text (this is what Yippy does, for example, and what Blekko used to do), because it promotes niche results better than Google’s model with smaller datasets, and when users have more seamless access to better niches, more sites can get rep easier. Example: try vs. throwing your username into Google. The clustering allows for much more informative/interesting results, I think, especially if doing inquisitive searching.

Kragen mentioned randomly introducing newcomers (adding noise), but I think it might work better still if noise was added to the searches for at least the beginning of it. A single previously-unclicked link on the first five pages of search results?
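
The seeding/age/click scheme above might score roughly like this minimal sketch (the decay constant, the log damping, and the function shape are my own illustrative choices, not part of the proposal):

```python
import math
import time

def score(submitted_at, unique_clicks, now=None, freshness_days=14.0):
    """Hypothetical score: a freshness bonus that wears off over
    ~freshness_days, plus log-scaled credit for unique visitors
    (the log damps click-farm inflation)."""
    now = time.time() if now is None else now
    age_days = max(0.0, (now - submitted_at) / 86400.0)
    freshness = math.exp(-age_days / freshness_days)  # decays toward 0
    return freshness + math.log1p(unique_clicks)
```

A freshly seeded link starts with the full freshness bonus and no clicks; as the bonus decays, only accumulated unique-user access keeps it ranked.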

@dredmorbius @kick @enkiv2 @freakazoid Well, we haven't really been working to fix the problem. Any of these problems. Well, maybe you have. But people like @Gargron are few and far between. Maybe we need better educational institutions.

@kragen True.

There's objective truth, and there's consensus truth. The two seldom match up.

Old Mr. Free Speech Hisself, John Stuart Mill, wasn't optimistic on the truth's capacity to out.

If it's necessary to set up competing credentialing networks which operate independently (competing churches?), that ... might have to happen.

Motivated irrationality is, unfortunately, A Thing. And can be quite lucrative and rewarding, at least in the short term.

@kick @enkiv2 @freakazoid

@kick As little as possible.

I've not participated online under my real name (or even vague approximations of it) for a decade or more. That was seeming increasingly unattractive to me already then. And I'd been online for at least two decades by that point.

Of the various dimensions of trust, anti-sock-puppetry is one axis. It's not the only one. It matters a lot in some contexts. Less in others.

Doxxing may be occasionally warranted.

Unmasking is a risk.

@enkiv2 @kragen @freakazoid

@dredmorbius @zardoz @kick @enkiv2 @freakazoid I think it goes back longer than that; IIRC Gumby commented on the fsb list in the mid-1990s that he wasn't worried about other companies contributing code to GCC and GDB because Cygnus could then turn around and sell the improved versions to Cygnus's customers. Of course those customers could get the software without paying, but they found Cygnus's offering valuable enough to pay for, and competitors' contributions just increased that value.

@dredmorbius @zardoz @kick @enkiv2 @freakazoid the big insight Tim had, which took the rest of us a while to appreciate, was how this gave new market power to companies that own piles of data, like Google or the ACM or Knight Capital. And now we have AWS and Azure and Samsung capturing a big part of the value from free software instead.

@kragen Fair enough. "At least" to the Earthquake Doc.

Though that *specifically* laid out the policy of adopting an Open Source orientation for IBM specifically to compete more effectively against Microsoft and Sun.

Similarly: Netscape's assault against Microsoft, with browsers (and trying to break the desktop stranglehold), Sun's release of StarOffice, Google turning Microsoft's AJAX against MSFT via Gmail, etc., etc.

@zardoz @kick @enkiv2 @freakazoid

@dredmorbius @kick @enkiv2 @freakazoid human societies have hierarchies of prestige; we can't hope to eliminate those through incentive design. We can hope to prevent things like despotism, witch-burning, the Inquisition, the Holocaust, and the burning of the Library of Alexandria. But there's going to be an old-enbies network, unavoidably.

@kragen As I mentioned earlier: Virtually any monopoly I can think of can be described as a network.

The Usual Suspects are transport and communications. Markets are networks (nodes: buyers/sellers, links: transactions/contracts/relationships), politics (power brokers and relationships), information (knowledge as web, multiple contexts).

Most networks have more central nodes, those nodes become power centres as they amplify small applied effort.

@zardoz @kick @enkiv2 @freakazoid

@kragen The 1990s power nexuses were:

- Microsoft's per-CPU OEM licenses.
- Office market- and mind-share.
- ISV network and mindshare.

And at the server level, proprietary Unix.

Free software disrupted these, at least on the server, and eventually in the emerging mobile/handheld space. But new networks and centres emerged: data, ads, search, retail, and social networks (Google, Amazon, Facebook).

Swapping monopolies isn't a win.

@zardoz @kick @enkiv2 @freakazoid

@kragen That's the Iron Law of Oligarchy, so, yeah.

But we don't have to help them along any. And if we can figure out negative-feedback mechanisms to retard the process, so much the better.

@kick @enkiv2 @freakazoid

@kragen Incidentally, the Harvey Weinstein and Jeffrey Epstein stories have made me aware just how much wealth, power, and corruption are also fundamentally network phenomena. Something I've touched on in a couple of Reddit posts IIRC.

@zardoz @kick @enkiv2 @freakazoid

@dredmorbius @enkiv2 @kragen @freakazoid Privacy isn't just deanonymizing! You can also track pseudonyms.

@kick Right. My comments were aimed more at qualifying my interest in / preferences for privacy.

I'm finding contemporary society to be very nearly intolerable. And probably ultimately quite dangerous.

@enkiv2 @kragen @freakazoid

@kragen Weinsteinomics 101: Monopoly is fundamentally a control dynamic, not a marketshare proposition

...Harvey Weinstein and the Economics of Consent by Brit Marling is one of the more significant economics articles of the past decade, though I'm not sure Ms. Marling recognises this. In it, she clearly articulates the dynamics of power, and re-establishes the element of control so critical to understanding monopoly...

@zardoz @kick @enkiv2 @freakazoid

@dredmorbius @kragen @zardoz @enkiv2 @freakazoid You made a post earlier about economics being a religion rather than science, and I think it's relevant here.

@kick Yes.

That's a point I find from a few writers.

Robert W. McChesney, now in media studies but trained in economics, specifically makes that point in his books (Communication Revolution particularly).

Philip Mirowski's "More Heat Than Light".

W. Brian Arthur who notes that virtually all economics is policy rather than theory driven. There's little actual theory, much of it questionable.

@zardoz @enkiv2 @kragen @freakazoid

@kick @zardoz @enkiv2 @kragen @freakazoid All: I'm fading a bit here, back at it later.

@dredmorbius @zardoz @enkiv2 @kragen @freakazoid What convinced me of that view initially was how many economists intentionally and repeatedly make and encourage dimensional faults in comparisons and estimates.

@dredmorbius @freakazoid

Well, you are discovering the possibilities offered by alternative cyberspace architectures, which could do just that. Automatically.

Many of you haven't clearly understood me when I have been talking about alternative cyberspace architectures. I really meant it: revisiting all the fundamental concepts of our current "computer world paradigm", including the notion of the computer itself, banning the client-server model forever, etc...

This is raw & radical crypto-anarchism.

@dredmorbius @kick @zardoz @enkiv2 @kragen @freakazoid This is a very interesting thread you had, but reading it rapidly, none of you has envisioned that radically changing the cyberspace architecture was the solution. From what I saw, all your reasoning is still imprisoned by the current norms and standards imposed by the Empire for the current cyberspace architecture.

@stman @dredmorbius @zardoz @enkiv2 @kragen @freakazoid The last *n* posts have been about indexing information, which is a puzzle you're going to have to solve in any configuration of hardware.

The reason "hardware changes" were not considered before the posts on indexing were simply because it was outside of the problem-space. There are multiple people trying to go that route right now, and all of them aren't even scraping the bottom of the barrel in terms of progress.

I've only ever seen one good proposal for it, myself, and even it doesn't really work any longer without a major redesign due to how quickly networking and internet usage in general has morphed over the past decade and change.

@dredmorbius @kick @zardoz @enkiv2 @kragen @freakazoid

According to my crypto-anarchist studies on the genesis of cyber-powers, the architecture of all known technological layers of a cyberspace architecture characterizes what I call the cyber-power model, which in turn characterizes the economic model.

The current status quo is definitely pushing for the neoliberal surveillance-capitalism model we have today.

But different cyberspace architectures can have cyber-power models that lead to

@dredmorbius @kick @zardoz @enkiv2 @kragen @freakazoid

fully different economic models, and therefore a radically different society.

Crypto-anarchists like me are studying how, by changing those architectures, we can restore human rights and have a fully social and solidary society, ecologically and sustainably driven, with alternative economic models and new forms of self-governance handled by new forms of cybernetics of trust.

@dredmorbius @kick @zardoz @enkiv2 @kragen @freakazoid

To show you how I "think" differently, I could even ask the following question to Edward Snowden; I'm sure he would answer like most of you would, at least for now:

Let's take this simple question:

"How do you fight mass surveillance?"

Most of you would answer: by implementing end-to-end cryptography, making mass surveillance very costly and therefore discouraging it.

That's the answer I get 99% of the time.

@dredmorbius @kick @zardoz @enkiv2 @kragen @freakazoid

And here is what a crypto-anarchist situationist like me would answer to this question :

Mass surveillance is first a matter of telecommunication networks' physical topology. By creating an alternative cyberspace architecture that is a true, fully P2P physical ledger network (you create physical links with all your physical neighbors), you make mass surveillance impossible: first because there is a fully distributed physical network,

@dredmorbius @kick @zardoz @enkiv2 @kragen @freakazoid

making eavesdropping on each link by any organization simply impossible; then second, you by design invent protocols that are properly ciphered on these physical P2P networks.

Many folks tend to equate P2P with P2P overlaid on TCP/IP. This is a nasty limitation. Crypto-anarchists like me prefer native P2P networks, not only not relying on TCP/IP, but also having their own chosen physical topology, voluntarily something looking like a true P2P

@kick @zardoz @enkiv2 @dredmorbius @kragen @freakazoid

To experiment with these new P2P-ledger physical topologies for telecommunication networks, all we need is this:

A long drill, 50 cm long, and to start making holes in the walls we share with all our neighbors when we live in flats, to run some CAT5 ethernet cables. Then we need to invent a new native P2P protocol...

@kick @zardoz @enkiv2 @dredmorbius @kragen @freakazoid

Some say it is impossible to deploy; they are wrong, it's fucking easy. Then we need things like small RONJAs to jump between flats...

It's all within our reach, indeed. The only thing we lack is experimentation and organization to elaborate these new standards.

@stman @zardoz @enkiv2 @dredmorbius @kragen @freakazoid Oh, wait, did I get baited into responding to a parody account? (I genuinely can't tell at this point.)

@dredmorbius @kick @enkiv2 @freakazoid yeah, although in many ways it's an improvement over Golden Horde society, Ivan the Terrible society, Third Crusade society, Diocletian society, Qin Er Shi society, Battle of the Bulge society, Khmer Rouge society, Holodomor society, People's Temple society, the society that launched the Amistad, etc. We didn't start the fire.

@stman @dredmorbius @kick @zardoz @enkiv2 @freakazoid undoubtedly there are many more things we have not managed to imagine than things that we have managed to imagine, however much we would like to radically rearchitect cyberspace. What's your vision?

@stman @dredmorbius @kick @zardoz @enkiv2 @freakazoid it's a start, but it doesn't go nearly far enough; right now we lack trustworthy hardware, trustworthy operating systems, and norms discouraging the revelation of walletnyms, even on the internet, while meatspace is rapidly being covered by cameras and drones, not to mention MAC loggers and microphones.

@kragen @dredmorbius @kick @zardoz @enkiv2 @freakazoid Going to write a text for you about it: first listing the long list of issues we currently face in the current paradigm, then drifting toward the visions. I'd rather talk about the directions we should follow, and why. My long- and medium-term followers here know my speech, my vision and directions, my analysis, and how I've worked; it's impossible to summarize in a few toots.

@kragen I'm referencing specifically the surveillance aspects, and the accelerating pace of that, especially over the past two decades or so. Though you can trace the trends back to the 1970s, generally.

Paul Baran was writing of the risks ~1966-1968, which is 52-54 years ago now.

IBM were actively demonstrating the risks 1939-1945.

Herbert Simon was conveniently ignorant of this in 1978, when Zuboff was discovering surveillance capitalism in her research.

@kick @enkiv2 @freakazoid

@stman @dredmorbius @kick @zardoz @enkiv2 @freakazoid I think Freifunk-like privacy-protecting mesh physical layers are a necessary ingredient, but by themselves they aren't sufficient either; and they introduce some new vulnerabilities that must be defended against.

@kragen @dredmorbius @kick @zardoz @enkiv2 @freakazoid All I can tell you is that I went deep, very deep, into analyzing all the issues at hand... Just as an example, my last study was on the conditions for having, and maintaining over time, fully demilitarized digital technologies, but I've been studying many matters of that kind these last 5 years. Many friends are asking me to summarize all this into a first

@kragen Of the various drawbacks of the Mongol Hordes, massive mobile technological surveillance was not a prominent aspect.

The Battle of the Bulge and Holodomor societies _did_ benefit from informational organisation. The Khmer Rouge and People's Temple may have, and the capabilities certainly existed.

General capabilities began ~1880, again with Hollerith and nascent IBM.

@kick @enkiv2 @freakazoid

@kick @zardoz @enkiv2 @dredmorbius @kragen @freakazoid

Yes, yes, all you say is true.
First of all, there is no progress because the Empire is persecuting, and plotting with its spies against, all those trying to take that route and organize. I know what I am talking about here.

Then yes, we are definitely talking about a major redesign of everything, because we ended up demonstrating that there is no other solution that could do the job at all levels.

@stman @kick @zardoz @enkiv2 @dredmorbius @freakazoid there's a certain amount of backstabbing and plotting going on, yeah, but ultimately it's kind of futile; you can cut every flower but you cannot stop the spring. And what we have to face today is really tame compared to what, say, MLK Jr. or Solzhenitsyn faced, much less Spinoza or Galileo.

@kragen @dredmorbius @kick @zardoz @enkiv2 @freakazoid publication which, unfortunately, I have not had the time to write down until now for personal reasons. What I am saying here is that I am going to answer you, and that I am very serious about the research done these last 5 years on this topic.

@kragen @dredmorbius @enkiv2 @freakazoid It was better in the 1960-80s for the most part, but sometimes I still think of:

[5000 well thought out lines of a single mail response on how Linux wipes the floor with Solaris performance-wise >quoted]

Have you ever kissed a girl?

        - Bryan

So the problem was at least prevalent by ‘96.

@kick @zardoz @enkiv2 @dredmorbius @stman @freakazoid he's not a parody, I think, just young and struggling to figure out how to navigate a legitimately very complicated political landscape. I wish I could tell you how many of my friends have committed suicide, been interrogated by grand juries, been betrayed by those they trusted most, etc. It's a situation that's difficult for even the best grounded and most experienced.

@kragen @dredmorbius @enkiv2 @freakazoid That's not a joke, by the way, for the two people in the entire world who have never read that thread:!topic/comp.sys.sun.hardware/wCd7fHnzHjw%5B76-100%5D

@kragen @kick @zardoz @enkiv2 @dredmorbius @freakazoid Yes, all what you say is true, I fully agree.

I've progressively moved to systemic approaches in my studies. The more I studied specific matters or issues to solve, the more convinced I became, every time, that EVERYTHING had to be fundamentally rearchitected. Even the "all Turing machine" concept, as a limitation, has been studied and discussed with my

@dredmorbius @kick @enkiv2 @freakazoid depending on who you were and where you lived, it was easy to end up with very little privacy after the Mongol invasion. The fact that the technologies employed were things like chains and swords rather than punched cards and loyalty scores was cold comfort to the enslaved. But, yes, I meant that the societies were more regrettable overall, not necessarily specifically along the surveillance axis.

@kragen @zardoz @enkiv2 @dredmorbius @stman @freakazoid It's times like these where my mindset of "Eh, you can't win if trying to enact genuine change; you're probably going to die but it's probably worth it" works beautifully. I've never had to worry about that question: all of the people I looked up to politically were dead before I got to ask them any questions, so pretty easy to just write off as a fact of life.

Including, not-coincidentally, one person I've vaguely referenced in this thread RE: how human-readable, secure, decentralized naming systems are a solved problem.

@stman @dredmorbius @kick @zardoz @enkiv2 @freakazoid I look forward to reading it! I know how difficult it can be to labor in obscurity for years trying to figure things out, and how much work it can be to express the results well. And it's really hard to tell the humans things, because they don't understand them.

@kick @enkiv2 @dredmorbius @freakazoid not sure Dave Miller's privacy was being invaded there? much less in a technologically inescapable way

@kragen My evolving thought is that privacy is an emergent concept, it's a force that grows proportionately to the ability to invade personal space and sanctum.

Pretechnical society had busybodies, gossips, eavesdroppers, spies, and assassins.

But if you wanted to listen to or observe someone, you had to put a body in proximity to do it. Plebes in preliterate (or largely preliterate) societies didn't even leave paper trails. A baptismal record, a marriage record, and a will, if you were lucky.

@kick @enkiv2 @freakazoid

@kick @zardoz @enkiv2 @dredmorbius @stman @freakazoid are you referring to Aaron Swartz's Bitcoin-inspired proposal of The Scroll?

@kragen We're at an age where a chat amongst friends, as here, is creating a distributed global written record, doubtless being scraped by academics, corporations, and state and nonstate surveillance systems.

US phone call history records date to the mid-1980s (if not before). Purchase, social, employment, and location records are comprehensive for at least the past decade, if not five or more.

@kick @enkiv2 @freakazoid

@kragen @kick @zardoz @enkiv2 @dredmorbius @freakazoid If I had to summarize all our previous work in a single sentence, I think I would say we completely liberated ourselves from current norms and standards, and from the way those scientific specialties were taught.

We even heavily discussed teaching itself: the consequences of the deliberate compartmentalization of the related scientific knowledge on our ability to think differently

@stman @kick @zardoz @enkiv2 @dredmorbius @freakazoid I hope your results are correct! The further you stray from well-explored knowledge, the easier it is to spend years working on ideas that turn out to be wrong in the end. The other day I was reading Gerard 't Hooft's wonderful page, "How to be a BAD Theoretical Physicist," and couldn't help but think of some episodes from my own childhood.

@kragen @enkiv2 @dredmorbius @freakazoid No, not Miller (I was referring to Bryan, because that post will never, ever be forgotten). I admittedly might have gotten lost (it's 6:00AM here and I haven't slept in two days, so I may have gotten threading messed up), but the connection in my head was -

Ah, yeah, I see what's up: I was thinking of a different thread with a similar set of people in it + @dredmorbius's line "I'm finding contemporary society to be very nearly intolerable. And probably ultimately quite dangerous." + comments RE: previous art of problem-space.

There's something resembling danger when you can track everything a person has ever said under a name that can be paired with their home address fairly easily; lack of privacy mixed with a full, immutable history (for the bad parts, less so for the good parts) makes things very interesting nowadays.

@kragen @zardoz @enkiv2 @dredmorbius @stman @freakazoid Yep! I think there's some stuff he got wrong (which was just because of how new it was, really), but nothing fatal to the concept itself.

(Also, I thought the rebuttals were boring and weak.)

If privacy is the ability to define and defend limits on information disclosure, there is precious little left.

The information glut is so immense that even multi-billion-dollar-funded state intelligence apparatus cannot meaningfully utilise the information preemptively. And yet those same state actors leak and lose their own personnel and intelligence data. Political organisations have email leaked. Generals and possibly presidents are downed.

@kick @enkiv2 @freakazoid

@dredmorbius @kragen @enkiv2 @freakazoid Bingo, this is exactly what I was thinking when I posted that Cantrill quote.

@kragen The same state actors drop death from the sky based on cellphone metadata and other data traces.

And those are the ones we think of as the good guys.

China, Saudi, Israel, Russia, and who knows who all else, are doing far worse.

And we're only really a decade in to this brave new mobile-data-surveillance world.

@kick @enkiv2 @freakazoid

@dredmorbius @kick @enkiv2 @freakazoid Well, privacy invasion was more typically done by your father, your husband, or your owner in many of these societies, rather than by the secret police. But it was in many cases quite pervasive. Of course when we think about medieval Europe, it's easier to imagine ourselves as monks, knights, or at least yeomen, than as villeins in gross, vagabonds, or women who died in forced childbirth, precisely because of that paper trail.

@kragen @dredmorbius @enkiv2 @freakazoid It's still done by all of those! Now it's just a mixed bag. Think about how much an adolescent risks if a guardian finds one of their social media handles (assuming they're doing anything interesting), for example.

@kragen @kick @zardoz @enkiv2 @dredmorbius @freakazoid

Ask @theruran ... he's been following and participating in these talks and studies for a long time... I just feel uncomfortable right now because there is so much I'd like to say, and I have no paper ready yet to show you, only a long series of posts on my timelines on different social networks over these last years, where many matters have been studied and discussed among like-minded crypto-anarchist friends.

@kick That danger / risk is an interesting one.

Some people focus on strictly one element -- the State, or Corporations, or Terrorists, or Narcocriminals, or the Criminally Insane, or Griefers, or Stalkers / Exes.

It's kind of all of the above.

In some cases I'm not fully sure it isn't simply having civic systems and the rule of law that matters more.

But mostly it's the data, the ability to use and misuse it, or simply the presumption that the data exist, that enables evil.

@enkiv2 @kragen @freakazoid


I am aware of this.

It's a persistent danger, in science, in general.

Even a genius like Einstein somehow lost his way in his attempts at his general theory. That is always the danger.

I think all my work on the genesis of cyber-powers and cyber-rights, on cyber-power models, and also my work on the origin of cyber-chaos, holds up. My discovery of what I call the paradox between the current cyberspace architecture and the meatspace is also right.

@kick @zardoz @enkiv2 @dredmorbius @freakazoid

@kick I've been kicking around the idea of manifestation vs. latency. Sociologist Robert K. Merton used the terms in the context of _functions_, but they're fundamental to information.

Some is manifest: immediately apparent, graspable, understood in totality.

Some is latent: the opposite in every way.

Paired with benefits and risks, it means we value manifest benefit and discount *both* latent risk and latent benefit. It's a built-in short-termism.

Not from human nature.

@enkiv2 @kragen @freakazoid

@kick That's simply how information works.

So with pervasive, durable, fungible, manipulable, queryable records on tremendous numbers of people, you don't know what future motives, contexts, norms, values, power structures, etc., will be.

The problem with Google's policy of getting right up to the creepy line, is that that creepy line moves.

So does the Surveillance Data Risk Line.

And we don't know what parts will move which way for what people and data.

@enkiv2 @kragen @freakazoid


I think I have managed to put cyber-chaos into equations and find its root causes, and now know how to fight it, and how to elaborate digital architectures that don't generate a paradox with the meatspace.

@theruran has been following this work, and I think he can testify that some milestones were clearly reached.

But as I said before, it will be impossible for me, right now, to summarize for you all the topics we've been discussing and studying.

@kick @zardoz @enkiv2 @dredmorbius @freakazoid

@kragen @theruran

The paradox I have identified, and I think solved, is that in the current paradigm it is impossible to fight, simultaneously, both chaos in the meatspace and cyber-chaos in the current cyberspace architecture.

I think we now know why, and therefore, how to correct this.

And it necessarily goes through redesigning everything, because it is purely a matter of architecture in all the known technological layers involved.

@kick @zardoz @enkiv2 @dredmorbius @freakazoid

@stman @theruran @kick @zardoz @enkiv2 @dredmorbius @freakazoid even if you've only identified the paradox correctly, it will be a significant contribution, quite aside from whether your proposed solution is the best solution or even a workable one

@kragen And yet, as the Chinese noted: Heaven is high and the emperor far away.

The inefficiencies of medieval systems (even highly-evolved bureaucratic ones as in China) left a great deal of latitude.

The lack of *material* wealth, or useful knowledge, imposed strong constraints. But the idea of being watched by unknown eyes, from anywhere on the planet, didn't exist. Your watchers were neighbours, and had profound limitations.

Still a threat, but knowable.

@kick @enkiv2 @freakazoid


I agree with this.

We know what needs to be solved now, and we're trying to find a way to do it, and my current proposal may not be valid, or may not be the only possibility.

Then, back to a question you asked me: for now, I am working on the concept of meta-cyberspace.

This is part of my proposal @theruran can testify about.

@kick @zardoz @enkiv2 @dredmorbius @freakazoid

@dredmorbius @kick @enkiv2 @freakazoid Right. Today recreational marijuana is legal in California; 30 years ago it could end your career in many jobs, and even today it can get you executed in much of Asia. Who's to say what its legality or public perception will be in another 30 years? Similarly for abortion, divorce, adultery, job-hopping, capitalism, or opposition to global pervasive surveillance.

@kick Google's watching that line move in all kinds of ways.

Ways that impose $5 billion fines in Europe.

Ways that may be turning public sentiment against it in the US, quite possibly harder and fiercer than what happened to Microsoft in the 2000s.

Google's been openly mocked on Hacker News for at least five years, if not longer. Dang just commented that their A/B practices have him switching his habits. I'd changed mine in 2013, and don't regret it at all.

@enkiv2 @kragen @freakazoid

@kragen I think my periodic observations that numerous states within the US *still* don't have a legal minimum age for marriage annoy a fair portion of the Fediverse.

Moral values are profoundly mutable over time. Sometimes in as little as a few years, but staggeringly so over decades and centuries.

I've reasons for believing we may be entering a period of higher flux in values serving as social identifiers, adopted as moral codes.

@kick @enkiv2 @freakazoid

@kick @enkiv2 @dredmorbius @freakazoid Much less so! In rich countries most women do have a room of their own, for example, and very few families will disown their children for premarital sex. Even gay sex is unlikely to result in fatal social sanctions in much of the world. Being kicked out of the house by your parents in your childhood is no longer a near death sentence. And of course many fewer people have owners at all, much less owners who can kill them at will with impunity.

@kragen Quite possibly in different directions in different locales, and not necessarily in a consistent direction over time even within given jurisdictions.

Drug (or sex, marriage, possibly business or technical) laws may swing wildly.

Where there's an overload of information, clearly evident, durable signifiers take on signalling significance, especially for group identity and loyalty.

@kick @enkiv2 @freakazoid

@kick @enkiv2 @dredmorbius @freakazoid OH. I see now. You weren't referring to what Bryan was doing to Dave or vice versa; you were referring to the fact that we are talking about it a quarter century later. Yeah, it seemed like a good idea at the time. 'course, at the time we were only a few tens of millions of Netizens.

@kragen On the other hand, previously one could travel, even a short distance, though also longer, and put much of that threat behind, starting over with a fresh identity.

That's ... extraordinarily difficult these days. Not unheard of, but it takes far more effort, risks and likelihood of being caught and exposed are much higher, and The System Never Forgets.

@kick @enkiv2 @freakazoid

@kick @zardoz @enkiv2 @dredmorbius @stman @freakazoid oh, yeah, clearly he was correct, even if NameCoin is not currently in wide use.

@kragen How many times have you been thankful you went to university in the age of film cameras, and prior to Facebook, Twitter, Snapchat, TikTok, YouTube, Imgur, Reddit, ...

My Stupid Shit is at best recorded on a single frame of film, or a few fading memories.

@kick @enkiv2 @freakazoid

@stman @theruran @kick @zardoz @enkiv2 @dredmorbius @freakazoid also if it's more comfortable to you to write things up in a different language, it might happen to be one I know; technical French, Spanish, and Portuguese are pretty easy for me, and I can manage Italian with slightly more difficulty

@kragen @zardoz @enkiv2 @dredmorbius @stman @freakazoid Did you interact with him much beyond the time he mentioned you on his blog?

And yeah, definitely, though I think that Namecoin is a less than ideal solution for the most part (the parts where it deviates are the weakest).

@dredmorbius @kick @enkiv2 @freakazoid most people couldn't; banishment was tantamount to a death sentence unless there was a recently-genocided frontier nearby. even then, it meant you'd never see anyone you loved ever again. but there were intermediate levels. even if traveling from town to town in many epochs posed a high risk of being robbed, and the near certainty of being raped if you were a young woman, there were other times when it did not; and even if there was a risk, you might return


I agree. Still, in the meantime, the discoveries crypto-anarchists like me have made do not invalidate his work at all; they recast it in a more systemic, global approach, liberated from current limitations. Most folks think up solutions within the current paradigm, and sometimes find some, or at least the beginnings or a PoC of them, but those cannot unlock their full potential because they are limited

@kick @zardoz @enkiv2 @dredmorbius @freakazoid

@kick @zardoz @enkiv2 @dredmorbius @stman @freakazoid we did some projects together and sometimes hung out in person. he gave me extremely valuable advice on several occasions

@kragen @dredmorbius @enkiv2 @freakazoid This is ignoring the recent pre-network past somewhat, isn't it? Before passports were required for international flight, people could more or less do this without consequence. Also, the small size of communities probably made the prospect of not seeing loved ones again much easier even in the oldest times.

@kragen @zardoz @enkiv2 @dredmorbius @stman @freakazoid I haven't felt jealousy in years, but this did it.

Still kind of frustrated that his request that the contents of his hard drives be released upon death wasn't seen through, but really given some of his more controversial opinions I can kind of guess possibly as to why.

@kick @zardoz @enkiv2 @dredmorbius @stman @freakazoid I'm sorry, I didn't mean to make you jealous. I was very lucky to know him. I'm frustrated about that too. On the bright side, I'm not dead yet; if things work out well, that will remain true for a significant period of time.

@kick @enkiv2 @dredmorbius @freakazoid when there was flight but no passports required for international flight, almost everyone either could not afford flight at all, lived in countries like the USSR that granted exit visas sparingly, or both. And international travel was itself riskier; my sister rode her motorcycle from the US to Argentina to visit me a few years ago, prompting the remark from my father that when he was her age, in the late 1970s, nobody would have survived attempting that.

@kick @enkiv2 @dredmorbius @freakazoid as for the prospect of not seeing loved ones again, I think the truth is rather the contrary: people today are much less close to their families than even a quarter century or half century ago, emotionally speaking. (For many of them that's a blessing, of course, but it still makes moving easier.)

@kragen @kick

There is an issue that was hard to see in Aaron's time: how US/UK cyber-imperialist hegemony fits into all this. The crypto-anarchist community was not mature enough then to perceive it: how it forms, how it is maintained, and what strategies and propaganda are used to sustain it.

We now see very clearly on all these topics. This is why I was telling you that the new form of crypto-anarchism folks like me are now pushing

@zardoz @enkiv2 @dredmorbius @freakazoid

@dredmorbius @kragen @kick @enkiv2 The search for the optimal culture is a simulated annealing process and we're entering a "heating up" phase.

@kragen @zardoz @dredmorbius @kick @enkiv2 Unfortunately Wikipedia suffers from issues like that person who's been tirelessly editing the pages of media organizations and journalists in order to discredit them. At the end of the day there's no substitute for reputation and "editorial voice". I'd prefer known bias to unknown.

I still don't know how powerful this technique can be, though; once it's known maybe it's defused.

@kick @zardoz @enkiv2 @dredmorbius @kragen Many economists (especially Russ Roberts) agree that it is not a science. But like science it is a branch of philosophy, not religion. There are certainly plenty of folks who are quite dogmatic, but also many who are intensely curious and interested in finding better ways to describe and predict how people interact and make decisions.

@dredmorbius @kragen @zardoz @kick @enkiv2 One reason companies are able to out-develop non-commercial organizations is that they're more able to make it people's full time job. So the problem to solve here is funding. A UBI would probably do it, but I think there are other ways, mostly involving collectivization. Coding communes: pool resources and minimize people's cost of living.

@dredmorbius @kick @enkiv2 @freakazoid
Search-as-identity is one solution, but I prefer petnames -- a decentralized identity system for decentralized networks. If somebody wants to find something globally it's fine to rely upon something strict but unmemorable, but finding stuff that's already resident on your box or that your direct connections are sharing ought to be a personal or community affair.
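A petname store, reduced to a toy, might look like the following sketch. Everything here (the `PetnameStore` class, the key strings, the `"alice/bob"` path convention) is invented for illustration: each node keeps its own memorable names for strict global identifiers, and resolves a path by asking the node it calls "alice" who *they* call "bob".

```python
class PetnameStore:
    """One node's private map of memorable names to strict global IDs."""

    def __init__(self):
        self.names = {}  # petname -> global identifier (e.g. a public key)

    def assign(self, petname, global_id):
        self.names[petname] = global_id

    def resolve(self, petname):
        return self.names.get(petname)

    def resolve_path(self, path, peers):
        # "alice/bob": ask the node I call "alice" who *they* call "bob".
        first, _, rest = path.partition("/")
        if not rest:
            return self.resolve(first)
        peer = peers.get(self.resolve(first))
        return peer.resolve_path(rest, peers) if peer else None


# Two nodes, each with purely local names; the keys are made up.
alice, me = PetnameStore(), PetnameStore()
alice.assign("bob", "key:3f9a")
me.assign("alice", "key:77c2")
peers = {"key:77c2": alice}  # transport: how I reach the node I call "alice"
assert me.resolve_path("alice/bob", peers) == "key:3f9a"
```

The point of the design is that no single name has to be globally unique *and* memorable at once: the global key is strict but unmemorable, and each hop of the path is a purely local, community-assigned name.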

@dredmorbius @kick @enkiv2 @freakazoid
Of course, one look at the state of computer security shows that for most cases (even very important ones) the social countermeasures are weaker than the technical ones. It's a lot easier to social engineer or rubber hose than to crack even a pretty weak password.

@freakazoid @dredmorbius @kick @enkiv2
Right. The biggest problem with both IP and DNS is that, one way or another, in order to get a piece of content, you have to ask a machine for it. That makes sense only if your main purpose is providing services rather than content. And even then, there's a lot of complicated and unreliable infrastructure needed to transparently multiplex the one IP endpoint across multiple hosts without crashing the gateway...
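The contrast can be made concrete: with content addressing you ask the network for a hash rather than a machine, and *any* peer may answer, because the reply is verifiable against the address itself. A minimal sketch (peer stores are plain dicts; nothing here is a real protocol):

```python
import hashlib

def content_address(data: bytes) -> str:
    """Derive an address from the content itself."""
    return "sha256:" + hashlib.sha256(data).hexdigest()

def fetch(address: str, peers) -> bytes:
    # Ask untrusted peers in turn; accept the first reply whose hash matches.
    for peer in peers:
        blob = peer.get(address)
        if blob is not None and content_address(blob) == address:
            return blob
    raise KeyError(address)

doc = b"<html>hello</html>"
addr = content_address(doc)
honest = {addr: doc}
liar = {addr: b"<html>evil</html>"}  # a tampered reply is simply rejected
assert fetch(addr, [liar, honest]) == doc
```

The flip side is that a content address can only ever name one immutable document, which is why services and mutable sites still need something host-like or key-like on top.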

@freakazoid @dredmorbius @kick @enkiv2
Just to point out -- URLs/URIs have their own specs (the URI RFCs) but aren't part of HTTP. (You guys know this, but it's important to make the distinction here.) HTTP URLs always resolve over HTTP and so can't be content-addressed -- they're always host-based. But you can stick an SSB, IPFS, or onion address in an HTML anchor tag.

@dredmorbius @kick @enkiv2 @freakazoid
If you mean ! as in the routing control, isn't that even worse? We probably want to specify *less* irrelevant information by default.

@kragen @zardoz @dredmorbius @kick @enkiv2 @freakazoid
SSB is something worth looking at re: combining social & technical concerns. The network is not fully connected (even less so than fedi) & you have a kind of automatic/passive filtering through this disconnection (especially through, like, transitive blocking). Spammers have to actively be followed by trusted peers in order to broadcast.
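The passive-filtering property is easy to sketch. This is an illustration of the idea only, not the actual SSB replication protocol: your replica carries only feeds within a few follow-hops of you, so a spammer nobody follows never reaches you, no matter whom *they* follow.

```python
def reachable(follows, start, hops):
    """Feeds replicated by `start`: everyone within `hops` follow-edges."""
    seen = {start}
    frontier = {start}
    for _ in range(hops):
        frontier = {f for node in frontier
                    for f in follows.get(node, set())} - seen
        seen |= frontier
    return seen

follows = {
    "me": {"alice"},
    "alice": {"bob"},
    "spammer": {"me", "alice"},  # follows everyone; followed by no one
}
assert reachable(follows, "me", hops=2) == {"me", "alice", "bob"}
assert "spammer" not in reachable(follows, "me", hops=2)
```

Blocking strengthens this further: removing a follow edge (or propagating a block transitively) prunes whole subgraphs from what you ever store or relay.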

@stman @dredmorbius @kick @zardoz @enkiv2 @kragen @freakazoid
Folks in this thread (& in their social circles) have been involved with theorizing about non-IP-based distributed store-and-forward. It's not really accurate to suggest we're not aware of it. (Other folks involved who should join this hellthread: @natecull )

@dredmorbius @kragen @kick @enkiv2 @freakazoid
Stafford Beer had some ideas about ways to rotate people through groups in such a way that ideas echo through a network. Based on graph theory & permutation. I've forgotten the name. Worth looking into as a way to grow/integrate folks into a large group by making connection in a smaller one & getting mirroring/feedback.
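A toy version of such a rotation scheme, as my own sketch (Beer's actual scheme, whose name the post above couldn't recall, is more structured): re-chunk a shifted permutation of the roster each round, so what someone hears in one small group gets re-voiced to new neighbours in the next.

```python
def rounds(people, group_size, n_rounds):
    """Schedule of small-group assignments, one list of groups per round."""
    schedule = []
    for r in range(n_rounds):
        # Rotate the roster by r positions, then chunk into small groups.
        rotated = people[r:] + people[:r]
        schedule.append([rotated[i:i + group_size]
                         for i in range(0, len(rotated), group_size)])
    return schedule

groups = rounds(list("ABCDEF"), group_size=2, n_rounds=3)
# Round 0 pairs A-B, C-D, E-F; round 1 pairs B-C, D-E, F-A; and so on,
# so ideas picked up in one pairing propagate to fresh pairings next round.
```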

@kragen @dredmorbius @kick @enkiv2 @freakazoid
In the absence of any negative feedback, whoever can produce the most positive feedback will win (and when competing on access to information, winning accumulates). Whoever gets an early monopoly has a lot of control over the worldview even after they lose that monopoly...

@dredmorbius @freakazoid @kick @enkiv2
Earlier RFCs had defined meanings for the parts of HTTP URLs, but vendors ignored the standards so now URL paths are just an arbitrary string which could mean anything.

@mathew I think this discussion hinges more on the host part, and what it might reference other than DNS as an HTTP (or HTTPS) protocol reference, so as to break from the DNS oligarchy.

An alternative is to define other protocol references, as with, say, doi://, which address specific content.

There's also the PURL (persistent URL) concept, now maintained by the Internet Archive.

And creating a self-sustaining decentralised namespace is challenging.

@freakazoid @kick @enkiv2

@enkiv2 Pretty much this.

It's an evolutionary problem, I think, with likely analogues and lessons in biological evolution.

Negative feedbacks are fitness checks?

@dredmorbius @freakazoid @kick @enkiv2 Back even further, the plan was that the web would eventually use URIs, which would be dereferenced to fragile URLs. But the host-independent transport layer never happened because one-way links that break were "good enough". URIs only really survived in the DTDs.

@mathew More on "why" would be interesting.

Insufficient motivation?
Sufficient resistance?
Excess complexity?

@freakazoid @kick @enkiv2

@dredmorbius @freakazoid @kick @enkiv2
I think (a) it's hard to do, (b) if you do it right the user never notices versus bells and whistles like <blink> and <marquee>, and (c) the web exploded really quickly and it was impossible to even get all browsers to render the same HTML, let alone all introduce a new transport layer.

@kragen Defining "network" in this context may help:

A collection of nodes and links between which _something_ flows: material, energy, information, forces, people, relationships, money.

Characteristics are size (nodes, links: 0, 1, 2, ... many), topology (unary, peer, chain, ring, star, tree, mesh, compound), throughput, permanence, directionality (directed, nondirected), protocols & formats, and governance.

Common & distinctive properties emerge.

@zardoz @kick @enkiv2 @freakazoid

@enkiv2 Bang simply as available notation. Now that I think of it, it might make a good routing _mechanism_ specifier:



Again, I'm not sure this is better than individual protocols.

Another option would be to specify some service proxy, which could then handle routing. URI encoding doesn't seem to directly provide that; apps/processes define their own proxy use.

@kick @freakazoid

@enkiv2 Right.

Part of my question is whether HTTP URIs can be repurposed / adapted / abused, or whether a new protocol (or protocols) needs to be specified.

Migration path / cost is lower with reuse, but that's not _absolutely_ essential.

@freakazoid @kick

@enkiv2 Also, "host" can be abused in all kinds of interesting ways -- a host that accepts search parameters and forwards to matching content, e.g.

(A search engine in "I'm feeling Lucky" mode, say.)

@freakazoid @kick
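That host abuse is straightforward to sketch: treat the host component of a made-up `search://` URI as the query, and resolve it against an index. The scheme name and the index entries here are invented for the demo.

```python
from urllib.parse import urlsplit

INDEX = {  # toy search index: query -> best-matching content URL
    "xanadu": "https://example.org/xanadu",
    "ronja": "https://example.org/ronja",
}

def feeling_lucky(url: str) -> str:
    """Resolve e.g. search://xanadu/ to the top hit for 'xanadu'."""
    query = urlsplit(url).hostname or ""
    return INDEX[query]

assert feeling_lucky("search://xanadu/") == "https://example.org/xanadu"
```

The same trick works with an ordinary HTTP host that 302-redirects to its top result; the URI syntax doesn't care that the "host" is really a query.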

@enkiv2 Which is another way of saying that social engineering and rubber-hoses are low-cost search / goal-attainment paths.

@kick @freakazoid

@enkiv2 SAI and petnames are two points in a space (not sure if 1D or n-dimensional).

Search utilises characteristics which may be internally specified (content, transforms) or external (metadata, assigned identifiers).

Petnames are locally-assigned non-global identifiers. They may be _shared_ among some group, but they're localised, folksonomic, nonauthoritative.

(Though local names can become global with time/use/convention.)

@kick @freakazoid

@freakazoid Absolutely. Commercialism's capacity to mobilise resources is phenomenal.

Early work on Free Software as an organisational model (see Coleman's and O'Mahony's works, among others) suggested FS/OS was an organisational model which could displace traditional proprietary SW dev. And in some cases it has.

Others not so much.

And it can be *adopted* by commercial enterprises (or govs, edus, orgs) as well, combining capital + FS/OS.

@kragen @zardoz @kick @enkiv2

@kragen @zardoz @enkiv2 @dredmorbius @stman @freakazoid Ha, don't be sorry! I kind of intended for it to sound ridiculous, anyway: people aren't toys or anything, so being jealous that one person got to interact with someone else is inherently comical to me, somewhat.

@kragen @enkiv2 @dredmorbius @freakazoid US to Argentina is a lot more difficult a trek than US to Canada ever was, isn't it? My view here may be a bit influenced by all of the people I know who fled the US for Canada before the border was actually maintained in any way other than symbolically, so I figure I could be biased, here.

Agree with most of the rest of that, though.

@enkiv2 @kragen @zardoz @dredmorbius @freakazoid SSB is really fascinating!

@dredmorbius @mathew @freakazoid @enkiv2 Lack of competence! (at least partly.)

I think it's startling how much of technical history is due to people with better ideas being entirely incompetent.

@dredmorbius @freakazoid @kragen @zardoz @enkiv2 One of the biggest things inhibiting libre software from taking over the world was takeovers, I think. Can you imagine how different things would seem if every outfit that even vaguely resembled Cygnus hadn't been scooped up by Red Hat?

@freakazoid The term is ... slightly ... exaggerated.

But you have elements of:

- A Received (or Revealed) Knowledge.
- An Anointed Priesthood.
- Sacred Texts.
- An exceedingly close relationship with Power.
- Ideological Purity Tests.
- A large Propaganda Arm.
- A strong resistance to actual empirical knowledge, most especially from the sciences.
- Routine rubbishing of dissident thought.
- Numerous True Believers.

The description's not far off.

@kick @zardoz @enkiv2 @kragen

@freakazoid Compared to other sources, Wikipedia's biases are far more known.


(Applies also to earlier economy-as-religion discussion.)

@kragen @zardoz @kick @enkiv2

@kragen Since the time periods and regimes we're discussing seem rather vaguely defined:

- When I spoke of modern surveillance society being near intolerable, I'm contrasting it with my own personal experience of the relatively recent past, say, life since 1970.

- More broadly, there's been a recent history of high mobility starting roughly 1800 - 1850 (corresponding largely with industrialisation and motorised sea and land transport), through about 2000.

@kick @enkiv2 @freakazoid

@kragen Travel freedoms weren't complete, but were _extensive_.

Modern passport controls began roughly in WWI.

Ethnic emigration controls existed, though were successively lifted largely ~1920 - 1970 in many areas.

*Internal* migration within nation-states was extensive, e.g., the Great Migration, Westward Migration, Dust Bowl migration, Rust-Belt to Sun-Belt, Brooklyn-to-Miami, California migration ~1930 - 1980, and general rural-to-urban and core->suburb flight.

@kick @enkiv2 @freakazoid

@kragen You also had criss-crossing transatlantic flows, blacks out of the United States, jews in, in the early-to-mid 20th century. Much movement throughout British Commonwealth states. Huge movements throughout Europe.

Generally: an ability, from 1800 - 2000, to pick up, move elsewhere, and start over again, throughout a large (and, for that time, expanding) part of the world.

And tracking was ... limited.

Passports and driver's licences: paper-based.

@kick @enkiv2 @freakazoid

@kragen Some banking records and the like.

And the precursors of modern credit bureaux: Dun & Bradstreet dates to the 1800s (the increased mobility made tracking reputations more important). The first modern novel on con-men, as opposed to mere tricksters, Melville's "The Confidence-Man", is set on the high-mobility throughway of its time, the steamboat-traversed Mississippi River. Mobility and distance communications open new avenues of fraud.

@kick @enkiv2 @freakazoid

@kragen But for the average person, *with the ability to travel*, one that was *widely* available 1850 - 2000, you could, for the most part, get up, transfer, and leave your past behind.

Not perfectly. But as a real possibility.

That ... seems far less possible now, taking a static read. More troubling is the trend, which looks strongly exponential, suggesting the near future will not resemble a decades-to-centuries distant past much at all.

That's my argument.

@kick @enkiv2 @freakazoid

@kick I was just listening to an interview with Safi Bahcall on "Loonshots".

On org-behaviour phase-shifts. Why some organisations are creative, some hidebound.

You can think of this as coming from competing forces, much as with solid-liquid phase transitions (binding energy vs. entropy). And transitions can occur rapidly.

The motivators for creating _and_ adopting standards are likely similar.

@enkiv2 @mathew @freakazoid

@kick And it's not merely competence. Much of it is mastery across a range of skills, including marketing, organisational leadership, fundraising, fighting off (or neutralising) legal and business threats, etc.

"Capitalism as the engine of innovation" suffers massively from the Texas Sharpshooter fallacy, and ignores the many souls it destroyed or ignored: Aaron Swartz, Ian Murdock, Ted Nelson, Doug Engelbart, Paul Otlet, Rudolf Diesel, Nikola Tesla, Philo Farnsworth...

@enkiv2 @mathew @freakazoid

@dredmorbius @enkiv2 @mathew @freakazoid Nelson was who I was thinking of when I said "incompetence," actually.

Your statement makes me want to ask, though: how was capitalism responsible for the death of Murdock? That seemed to be strictly a police violence problem; he was making millions.

And Swartz's case, while indirectly caused by capitalism, seemed to be more caused by the state. (JSTOR pulled out quickly while MIT and the Fed insisted on pursuing.) One could argue I guess that his ideas were kind of neglected, but interestingly he seemed to have a lot of success with them as he got later in life.

@kick "The State" is an extension of the capitalist arm, to an enormous extent. It always has been, and you can read much of Smith's "Wealth of Nations" as addressing that specifically. "Wealth, as Mr Hobbes says, is power."

Lessig's project of the past decade:

@enkiv2 @mathew @freakazoid

See Jane Mayer's "Dark Money", or Oreskes' "Merchants of Doubt". There's also the 1937 analysis of Establishment opposition to innovation, Bernhard J. Stern's "Resistances to the Adoption of Technological Innovation".

As Markdown, thanks to yours truly:

@enkiv2 @mathew @freakazoid

@dredmorbius @enkiv2 @mathew @freakazoid I've read Lessig's thoughts on this, and while I mostly agree with them, it doesn't seem to be relevant in the Murdock case (which seemed like it wasn't premeditated), if nothing else? Thanks for the other links; will read soon.

@kick There's the extraordinarily long history of oppression of the Small by the Large through the instrument of Government: anti-abolition, anti-union, anti-suffrage, anti-worker-safety, anti-environmental-regulation, anti-public-domain, anti-free-software, anti-cryptography.

Not undertaken at the behest or pleas of the vast majority of the population.

So "state" and "capitalist" are not fully distinct.

@enkiv2 @mathew @freakazoid

@kick Ted Nelson may well suffer from technical and organisational handicaps. He certainly cops an attitude (and reminds me of a few adjacent conversational participants in this and some related threads).

But he has some Big Dreams, and dreams which have a long legacy (Paul Otlet, whom I've just discovered, being another early pioneer). The overall mission is one I tend to agree with, if perhaps not Xanadu's specific approach.

The goal has powerful enemies though.

@enkiv2 @mathew @freakazoid

@kick The cases of Murdock and Swartz are slightly different, but in general: people with a demonstrated enormous talent *and* a goal of direct social benefit were attacked and/or abandoned by the instruments of their own society.

Carmen Ortiz, Stephen Heymann, Michael Pickett, M.I.T., JSTOR, M.I.T. President L. Rafael Reif, and others in the prosecution chain of command are complicit in Swartz's murder. They drove him to it in all deliberation.

@enkiv2 @mathew @freakazoid

@kick And the proprietary academic publishing industry must be destroyed, in Swartz's name.

It will be.

@enkiv2 @mathew @freakazoid

@kick Murdock also suffered mental health issues. He'd done well, but as with many technological pioneers, saw hugely uneven success.

At a time when he was in crisis, and quite evidently and obviously so, the system entirely failed him.

As it does so very, very, very, very many.

Sucks out all they've got to give, then spits them out.

@enkiv2 @mathew @freakazoid

@kick The specific question of *how* you create _and maintain_ a state to serve the greater public good is a complex one dating to the earliest written histories.

I'm not _against_ the state. I'm not an anarchist. That simply creates a vacuum for Power to move into.

This will _always be_ a constant struggle.

But an appropriately-structured system, with checks and balances, multiple entities, and strong checks on unlimited power, should be possible.

It's work.

@enkiv2 @mathew @freakazoid

@dredmorbius @enkiv2 @kragen @zardoz @kick @freakazoid
Yeah, SSB = scuttlebutt. It's an incredibly interesting protocol and community with really vital discussion about norms and community management with a kind of vaguely left-libertarian flavor, hobbled by a couple specific technical problems that make onboarding & setup hard & make it tough to implement clients that aren't electron apps.

@dredmorbius @enkiv2
well, anything that decelerates runaway feedback loops

@dredmorbius @enkiv2 @kick @freakazoid
Bang was used in usenet addresses to separate a series of hosts in order to specify a routing, since UUCP would be done by machines calling specific other known machines nightly over landline phones. You'd see bang routing in usenet archives as late as the early 90s. I'd be surprised if it's not still theoretically supported in URLs.
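(For the curious, a minimal sketch of what a bang path encodes: an explicit, hop-by-hop source route. The hostnames are invented for illustration.)

```python
# A UUCP bang path like "hostA!hostB!hostC!user" names every relay
# in order; each host hands the message to the next one in the chain.
def parse_bang_path(address: str):
    """Split a bang path into (route, user)."""
    *route, user = address.split("!")
    return route, user

route, user = parse_bang_path("seismo!mcvax!ukc!jdoe")
assert route == ["seismo", "mcvax", "ukc"]
assert user == "jdoe"
```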

@dredmorbius @enkiv2 @freakazoid @kick
It can be, & web tech does. But they're all single points of failure. If there's host-based addressing at all, then there's always a machine that needs to stay up forever or else your data is inaccessible.

@dredmorbius @mathew @freakazoid @kick @enkiv2
A URL/URI distinction (with permanent URIs) would mean having static content at addresses & having that be guaranteed. There wasn't initial support for any guarantees built into the protocol, & commercial web tech uses relied upon the very lack of stasis to make money: access control, personalized ads, periodically-updating content like blogs, web services (a way to productize open source code & protect proprietary from disassembly).
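(A minimal sketch of the guarantee being discussed: with content addressing, "static content at an address" becomes verifiable, because the address *is* a hash of the bytes. This is just the core idea, not any specific protocol; real systems like IPFS layer multihashes/CIDs on top.)

```python
import hashlib

def content_address(data: bytes) -> str:
    """Derive an address from the content itself."""
    return hashlib.sha256(data).hexdigest()

doc = b"<html>hello</html>"
addr = content_address(doc)

# A fetcher can check that what it received matches what it asked for:
assert content_address(doc) == addr                      # stable
assert content_address(b"<html>edited</html>") != addr   # tamper-evident
```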

@enkiv2 Right.

Though my question was, specifically: are negative feedbacks fitness checks? That is, the "selection" process within "variation, inheritance, and selection".

And vice versa: are fitness checks / selection processes negative feedback?

Not sure that they are or aren't. Musing on this.

Within a systems context, yes, negative feedback is required for sustainable function.

@kick @enkiv2 @dredmorbius @mathew @freakazoid
Having worked for Ted -- I would agree only in specific constrained ways. However, throughout the 80s the technical end of Xanadu was being run at Autodesk with managerial control ultimately with John Walker (Ted was not in the picture), & everybody involved during that era was hyper-competent by 80s software dev standards. Drama over a late redesign by Mark Miller (now a VP of something at Google) kept xu88 from shipping on time.

@enkiv2 Email also.

I used (though understood poorly) bang-path routing at the time.

So yes, I'm familiar with the usage and notation. The question of whether or not it's appropriate here is ... the question.

At present, HTTP URL's *presume* DNS.

The problem is that DNS itself is proving problematic in numerous ways, that ... don't seem reasonably tractable. The dot-org fiasco is pretty much the argument I've been looking for against the "just host your own domain" line.

@kick @freakazoid

@enkiv2 That's worked, at best, with difficulty for large organisations -- domain lapses, etc., occur with regularity.

Domain squatting, typosquatting, and a whole mess of other stuff, is a long-standing issue.

In that light, Google's killing the URL _might_ not be _all_ bad, but they've been Less Than Clear on what their suggested alternative is. And I trust them less far than I can throw them.

For individuals, persistent online space is a huge issue.

@kick @freakazoid

@dredmorbius @enkiv2
elimination of options based on failure of fitness checks certainly is a subset of negative feedback. i'm not assuming that the negative feedback in question is non-arbitrary though. it's just that in the absence of any negative feedback, everything goes positive, and whoever has the largest reach cannot be beaten. with negative feedback a powerful actor can be deplatformed by a coalition.

@enkiv2 Then there's the whole question of how many spaces is enough. There are arguments for _both_ persistence _and_ flexibility / alternatives, and locking everyone into a _single_ permanent identity generally Does Not End Well.

The notion of a time-indexed identity might address some of this. Internet Archive's done some work in this area. Assumptions of network immutability tend to break. In time.

@kick @freakazoid

@dredmorbius @enkiv2 @mathew @freakazoid Fully agree with the second paragraph, my disagreement in the initial comment was that I'm under the belief that JSTOR did _less_ harm (they still did a lot of harm) than the other parties (they dropped their case pretty much immediately). But overall I agree, yeah.

@enkiv2 @kick @dredmorbius @mathew @freakazoid
Ted is not a programmer (but is really good at reasoning about algorithms & data structures). His ADHD makes him a less effective manager. Since 1990, everybody working under him has been a volunteer & no Xanadu-branded project has had a team of more than 2 devs except under XOC.

@dredmorbius @enkiv2 @mathew @freakazoid Progress has definitely been made! There's only been a single paper that I've had trouble accessing in the last few months, despite having no legal access to papers. Can't wait until the system collapses further.

@dredmorbius @enkiv2 @kick @freakazoid
Yeah. Any immutability needs to be enforced, because when the W3C declared that changing web pages is Very Rude all the scam artists & incompetents did it anyway. Content archival projects like the Wayback Machine become easier if you have static addresses for static content & some kind of mechanism to repoint at a different set of static documents (like IPFS+IPNS).
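(A toy sketch of the IPFS+IPNS split mentioned here: immutable, content-addressed documents plus one mutable name record that can be repointed at a new root. The storage and naming here are invented stand-ins, not the real APIs.)

```python
import hashlib

store = {}  # content hash -> bytes (immutable once written)
names = {}  # stable name -> current content hash (the mutable pointer)

def publish(data: bytes) -> str:
    """Store content under its own hash; the address never moves."""
    h = hashlib.sha256(data).hexdigest()
    store[h] = data
    return h

def repoint(name: str, content_hash: str):
    """Only the name record mutates -- old content stays retrievable."""
    names[name] = content_hash

v1 = publish(b"index v1")
v2 = publish(b"index v2")
repoint("my-site", v1)
repoint("my-site", v2)   # the name now resolves to v2...
assert store[names["my-site"]] == b"index v2"
assert store[v1] == b"index v1"   # ...but v1 is still there by hash
```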

@enkiv2 @dredmorbius @freakazoid This is why I brought up bang paths, yeah. Very cool bit of history!

@enkiv2 @dredmorbius @freakazoid (also e-mail addresses rather than just usenet addresses, though the distinction is fuzzy anyway.)

@enkiv2 It's also a transition path, which addresses another element of this question.

If we're looking at coming up with a DNS-independent addressing scheme, then operating a set of reflectors, relays, or gateways (similar to Usenet-Email, Usenet-Web, or Internet-BBS gateways), might offer a path.

The relays _might_ be an online infrastructure, including a distributed one (in both IP and namespace) _or_ a locally-provisioned one as an HTTP or Tor proxy.

@freakazoid @kick

@enkiv2 The advantage is in being able to partition the URL into the DNS-dependent and -independent elements.

The proxy is DNS-dependent (though you can override it locally). The content, metadata, role-based, or other location-independent scheme is passed on to the proxy.

This gives a backwards-compatible path from the Old Web to the New.

And on the New Web you'd have the location-independent addressing as standard.

Meantime You Can Get There From Here. Which helps.

@freakazoid @kick
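(A sketch of that partition: only the proxy's hostname is DNS-dependent, while the path carries a DNS-independent, e.g. content-based, identifier. The gateway host and path layout here are hypothetical.)

```python
from urllib.parse import urlparse

def split_gateway_url(url: str):
    """Separate a gateway URL into its DNS-dependent and
    DNS-independent parts."""
    p = urlparse(url)
    return p.netloc, p.path.lstrip("/")

proxy, content_id = split_gateway_url(
    "https://gateway.example/sha256/4b7a9c0ffee")
assert proxy == "gateway.example"           # resolvable via DNS (or overridden)
assert content_id == "sha256/4b7a9c0ffee"   # meaningful without DNS
```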

@dredmorbius @enkiv2 @freakazoid @kick You can try to fight this way, but you're wasting your time. A global paradigm shift in cyberspace architecture is necessary. Still, in the meantime, we can find clever tricks to fuck them, but in my view this should never distract us from building our own standards and alternative cyberspace architecture. We're ahead of Microsoft in terms of concepts. Never lose sight of that.

@enkiv2 So, no, you _don't_ need content permanently at addresses.

You only need persistently accessible _gateways_ to URI-referenced content, much as you're already starting to see through nascent schemes such as DOI-based URIs for academic articles, e.g.:


Web browsers don't yet know what to do with that. A DDG bang search or Sci-Hub should, though.

Other content-based addressing methods likewise.

@mathew @freakazoid @kick
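(A sketch of the DOI-gateway idea: doi.org is the real public resolver, but the DOI below is made up for illustration.)

```python
def doi_to_url(uri: str) -> str:
    """Map a doi: URI onto the public resolver gateway."""
    assert uri.startswith("doi:")
    return "https://doi.org/" + uri[len("doi:"):]

# The gateway, not the browser, knows where the document lives today.
assert doi_to_url("doi:10.1000/example.123") == \
    "https://doi.org/10.1000/example.123"
```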

@kick I'm (trying to) reread the MIT report on the incident.

That's also rage-inducing.

@enkiv2 @mathew @freakazoid

@kick There's a huge back-archive that's still hard to find.

Though the situation's getting vastly better.

Eventually the (surviving) publishers will turn to a public-goods model, tax-supported, because it's the only way they can exist. And I'm talking about substantially _all_ publishing.

Academic: revert copyrights to authors, publish through Universities, as it was previously.

@enkiv2 @mathew @freakazoid

@dredmorbius @enkiv2 @mathew @freakazoid @kick
This lets us keep HTTP for transport through a hack but I'm not sure how useful that is in a world where IPFS, DAT, and bittorrent magnet links all exist & are mature technologies. (Opera has supported bittorrent as transport for years, & there are plugins for IPFS and DAT along with fringe browsers like Brave that support them out of the box.) HTTP has already been replaced by HTTPS which has been replaced with QUIC in most cases now...

@enkiv2 I'd argue that there's a place for redacting content -- see the Bryan Cantrill thread from 1996 previously referenced. That's ... embarrassing. Not particularly useful, though, except perhaps as a cautionary tale.

There's a strong argument that most social media should be fairly ephemeral and reach-limited.

There are exceptions, and *both* promoting *and* concealing information can be done for good OR evil.

@kick @freakazoid

@enkiv2 @dredmorbius @mathew @freakazoid @kick
In other words, in terms of getting widespread support for a big protocol change, the killer isn't compatibility with or similarity to already-existing standards like HTTP but, basically, whether or not it ships with chrome (and thus with every major browser other than firefox).

@dredmorbius @enkiv2 @kick @freakazoid
In terms of negative feedback -- I don't consider redaction of already-published material to be the best or most useful form. We see problems that could be solved by this, if mirroring & wayback machine & screenshots didn't exist. I'm more hopeful about solving the dunking problem with norms.
Reach is a lot more nuanced & powerful. Permanent & reach-limited like SSB feels like the right thing for nominally-public stuff.

@enkiv2 @dredmorbius @kick @freakazoid
(Secret stuff is a different concern. Encryption gets broken. Accidentally leaking secret info publicly is a problem but giving up all of the benefits of staticness -- mostly making decentralization viable -- won't solve the whole problem and also IMO isn't worth it for the few cases it does resolve.)

@enkiv2 For ordinary citizens, the ability to unpublish / recall content seems fair -- that's the EU's RTBF.

For organisations, governments, highly significant individuals, criminals, and others with significant social obligation or power, the ability to capriciously unpublish is much more problematic.

The nature of online communications makes what were previously _streams_ into _records_, which can have tremendous durability. Everything needn't last forever.

@kick @freakazoid

@enkiv2 @dredmorbius @zardoz @kick @freakazoid what are the technical problems with SSB? I've been trying to figure out where to find a straightforward explanation of the protocol at, like, the level of RFC 821.

@dredmorbius @kick @enkiv2 @freakazoid yeah, I mostly agree. Most people couldn't fly, but more than a third of them could travel, and more than half could travel with their families. And there was no real way to track people.

@kick @zardoz @enkiv2 @dredmorbius @freakazoid I'm not sure; I think GCC usage has gone up dramatically since 2000, and it's mostly been Clang rather than icc or VC++ that has been the competition. Certainly Cygnus's owners *expected* the RH purchase to increase Cygnus's power, not decrease it.

@kick @zardoz @enkiv2 @dredmorbius @freakazoid well I guess from another point of view the main competition for what GCC did in 2000 has been new programming languages and Python, almost all of which are free software

@kragen @enkiv2 @dredmorbius @zardoz @kick @freakazoid
SSB uses progressively signed JSON, where the text of the JSON gets hashed and the hash is added to the end. It also uses keys. Key order isn't defined in JSON so all implementations, for compatibility reasons, must use the order that happened to be produced by nodejs when the first SSB message was composed. This has been a barrier to non-v8-based clients (though a rust one exists now).
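(A small illustration of why undefined key order bites here: the signature/hash covers the serialized *text*, so two semantically equal objects hash differently if their keys serialize in a different order.)

```python
import hashlib
import json

def hash_json_text(obj) -> str:
    """Hash the serialized JSON text, preserving the object's key order."""
    text = json.dumps(obj, separators=(",", ":"))
    return hashlib.sha256(text.encode()).hexdigest()

a = {"author": "alice", "seq": 1}
b = {"seq": 1, "author": "alice"}

assert a == b                                  # equal as objects...
assert hash_json_text(a) != hash_json_text(b)  # ...unequal as text
```

Which is why every implementation has to reproduce the exact ordering the original nodejs serializer happened to emit.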

@dredmorbius @enkiv2 @kick @freakazoid
I find RTBF problematic because it's not very useful in the absence of norms against personal archiving (itself a problematic thing). We're better off developing norms about carefully checking the context around claims of wrongdoing before acting on or spreading those claims -- something that becomes easier when public information cannot be modified after publication. That's a tangent even by the standards of this thread tho

@enkiv2 @dredmorbius @freakazoid I need an image macro for "That technical problem is too hard! Let's change the world instead!"

NB. You may very well be right, but it still feels very comical in some way.

@enkiv2 @dredmorbius @kick Bang paths in Usenet serve a different purpose than in UUCP. They're used to prevent loops and to trace the path a message took, not for routing. Usenet messages are flooded to each peer that's configured to receive them, so there's no need to specify a route.
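(A sketch of that loop-prevention use: before flooding to a peer, check whether the peer already appears in the message's path. Hostnames invented.)

```python
def should_forward(path: str, peer: str) -> bool:
    """Flood to a peer only if it hasn't already seen the message."""
    return peer not in path.split("!")

assert should_forward("hostA!hostB!poster", "hostC") is True
assert should_forward("hostA!hostB!poster", "hostB") is False
```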

@enkiv2 @dredmorbius @kick The idea that norms can solve this problem is incredibly naive. Norms aren't going to fix it when someone hacks into your computer and videos you masturbating to something perfectly legal but weird and then publishes it all over the 'net. The average person isn't going to become enlightened enough in our lifetimes for this not to cause significant harm to someone.

@enkiv2 @dredmorbius @kick I'd say exactly the opposite. Immutability doesn't give nearly enough benefit to be worth not being able to unpublish things.

@enkiv2 @dredmorbius @kick Systems like the Wayback Machine exist in a gray area right now. They take down stuff when asked, but it's a PITA to ask everyone who might mirror a piece of content. There needs to be a standard for this, such that when you have a valid order to take something down, any site that mirrors the content can process that automatically.

@enkiv2 @dredmorbius @kick I'm fine with the system being voluntary; sites that don't comply being forced underground - through norms - solves most of the problem.

@freakazoid It's worth noting that what's perfectly legal _here_ and _now_ might not be so in some _there_ and _then_.

A fact which the pr0n field has had to deal with in more than one time and/or jurisdiction.

Hell: the contraceptives-information field, to cite something that's entirely quotidian in much of the world now.

@enkiv2 @kick

@freakazoid There's also a wide space between "destroy all extant copies" and "embargo publication until the principals and associates are well dead", as is presently the case for many personal materials.

Note that this need not merely be the _author_, but also those directly affected or mentioned. Sometimes additional others (descendants / associates).

But at some point, "private" _should_ pass into "common history". Usually.

Cultural appropriation remains an argument.

@enkiv2 @kick

@freakazoid And just to take that last and extend it further.

Suppose some artefacts are found in a location and appropriated by the discoverers, who are non-native to the place.

An indigenous claim is raised, based on cultural association with the space.

It's determined that the claimants aren't of the culture creating the artefacts, but another which subsequently occupied the territory.

Wiping out the creators.

But not before interbreeding, perhaps as war spoils, such ...

@enkiv2 @kick

@freakazoid ... that there are possible descendants.

How should matters be adjudicated?

(I'm not saying any one claim or right is correct or wrong. Only that the story can become ... complicated. Multiple alternate variants might be formulated. Many with factual basis according to anthropological records.)

@enkiv2 @kick

@freakazoid @enkiv2 @dredmorbius @kick
Depends on how much you take advantage of it. Immutability is rare so we basically don't have tech that uses it. (In plt, we have functional languages, which are basically just a matter of saying "if all variables are immutable what does that mean". Outside of plt, it's much more rare!) Social ramifications of immutability can be great or terrible depending on how we engineer norms around it.

@kick @enkiv2 @dredmorbius @freakazoid
It's more a matter of: the social problem cannot be fixed by a technical change, so we should employ a social change instead. No matter what we do on a technical level, we can't really move the needle on this.

@enkiv2 @kick @dredmorbius @freakazoid
Changing norms is harder than employing technical systems because power is not as lopsided. To change norms, you need buy-in from most participants; to change tech, you just need to be part of the small privileged group who controls commit access. This is why it's so important, though. Norms aren't set in stone but they'll only change if you can actually convince people that changing their habits is a good idea!

@enkiv2 @kick @dredmorbius @freakazoid
Most people online have had bad experiences with people weaponizing out-of-context information -- that's why technical solutions like RTBF exist. RTBF not actually working, while simultaneously pushing power into the hands of centralized corporate services, is obvious to most people too. Saying "it's impolite to dogpile on somebody without checking whether or not you've been misled first" is way less extreme.

@enkiv2 @kick @dredmorbius @freakazoid
Re: the speed at which norms can change, consider content warnings. They went from something only a handful of folks with PhDs (trying to work out experimental ways to avoid meltdowns in extreme circumstances) had even heard of, to something everybody is aware of & only jerks believe are never justified, in a matter of ten years. We still argue about when they're justified but there isn't a serious contingent against using them at all.

@enkiv2 @kick @dredmorbius I'm not arguing for RTBF. I'm arguing for not making it impossible to unpublish content.

CWs are nowhere near universal and the fact that they're not proves my point quite nicely.

@enkiv2 @kick @dredmorbius There's also the fact that people deliberately exploit immutable systems to publish stuff that's damaging. For example, there's kiddie porn in the Bitcoin blockchain.

@freakazoid @enkiv2 @kick @dredmorbius
CWs are not universal, but they are near-universal in communities in which they exist at all. The risk of immutability of public material is mostly around blackmail (which only works within one's in-group or in groups where one is forced to operate) & centralized enforcement (which can't be performed against sufficiently large groups). When immutability & good norms around context coincide, outsiders are largely irrelevant -- unless it ultimately loses.

@freakazoid @enkiv2 @kick @dredmorbius
This is a fair point, though I wouldn't pick CP as a good example of infohazard. Depending on one's model, CP is contraband either because a market for it incentivizes abuse or because exposure to it incentivizes abuse. Under the former model, having it on the blockchain lowers abuse potential. Obviously a complex & emotionally charged topic (even more so than "if you burn a million dollars does the value of a dollar bill go up or down")

@enkiv2 @freakazoid @kick @dredmorbius
The risk profile of putting contraband or blackmail material on a blockchain is basically the same as the risk profile of keeping a copy on paper in a safety deposit box & periodically mailing out photocopies -- except that this latter *only* works for people with an incentive to store info indefinitely. In other words, it puts the power to select what gets remembered in the hands of whoever thinks they will want to distribute it in the far future.

@enkiv2 @freakazoid @kick @dredmorbius
Really, norm-based solutions can't work unless practically everything is immutable either. If everything is immutable then context can be retrieved in the future even if nobody thought to preserve it at the time. This functionally defangs blackmail because lies-by-omission are not backed up by layers of friction between everybody & whatever information was omitted.

@enkiv2 @kick @dredmorbius I don't see how things' not being unpublishable could defang blackmail. Blackmail will just apply to information that hasn't been published in the first place.

This goes beyond mere disagreement; this is a system I would kill to stop.

@enkiv2 @kick @dredmorbius This is the argument 4channers make against outlawing revenge porn. "Women just need to learn to stop allowing boyfriends to photograph them naked, or accept that naked pictures of them are going to be on the Internet."

No. We live in a society. You publish shit that hurts someone else, you get hurt yourself.

@freakazoid @enkiv2 @kick @dredmorbius
I'm not sure, in that case, what risk profile you're talking about. Are we talking about a case where someone publishes something about themselves that they later regret? Where someone publishes something about themselves & another party takes it out of context? Or where someone publishes information about someone else without permission?

@enkiv2 @freakazoid @kick @dredmorbius
I can't think of an example of a problem that being able to unpublish only things that you yourself have published will reliably solve, in a world where backups & blackmailers exist. (It solves the pseudo-problem of deciding that a post you've published is potentially risky and undoing it before it has actually caused a problem. I don't think that's what you're talking about, though.)

@enkiv2 @freakazoid @kick @dredmorbius
And, on the other hand, unpublishing what *other people* have published doesn't appear to be on the table. It has a lot of issues and complications, & is generally handled by lawsuits or by corporate simulations of lawsuit-style deliberation. It can be handled by admin fiat in federated systems but scaling to distributed systems means it becomes a per-post version of transitive blocking. (Cancel messages, etc.)

@enkiv2 @kick @dredmorbius My goal is to make it easy to indicate to people who don't want to publish stuff against the will of folks who are impacted by it that you'd like them to take it down.

@enkiv2 @kick @dredmorbius The situation is this: even though they will take stuff down on request, you have to separately ask them and everyone else.

Yes, there will be attempts to abuse such a system, which is why it should not be legislated into place by government but built by people who want to have a robust publication system that at least makes an attempt to minimize harm.

@enkiv2 @kick @dredmorbius I think the big issue here is reachability vs discoverability. This was an issue Mark Zuckerberg did not understand when designing graph search, until Facebook employees practically revolted and told him that it was a bad idea to let people bypass permissions like friends list visibility just because it was possible to construct someone's friends list by scraping others' pages. It's also encountered when public records go online.

@enkiv2 @dredmorbius @freakazoid CWs were an organic change, first-propagated in relatively insular communities where norms permitted (non-religious) preaching & crowd-shaming. Not sure how that change would go if not sparked in a similar community, but there doesn't seem to be a similar community at the moment.

@freakazoid @enkiv2 @dredmorbius Why is it always 4chan users who get blamed for bad culture on the internet? It's literally the queerest place on the entire network, yet without exception it gets blamed for the things that redditors are primarily responsible for.

@freakazoid @dredmorbius @enkiv2 *No. We live in a society. You publish shit that hurts someone else, you get hurt yourself.*

This is a slippery and stupid slope, and it justifies what's currently happening to people like Snowden, Manning and Assange, despite them not doing anything that was actually morally wrong. I'd accept a claim like this with reduced scope, but as it stands that's way too wide.

@freakazoid @enkiv2 @dredmorbius Graph search lasted for six years with full functionality, and it doesn’t seem like it was that bad of a solution for Facebook.

Also, it wasn’t designed by Zuckerberg, it was designed by Google employees.

(And further, it was a great idea. So much was dug up on politicians because of it that the world was in an undeniably better spot.)

@kick @enkiv2 @dredmorbius The solution they put into place for graph search was that you could only search edges that were accessible to you in both directions. It wasn't a fundamental problem with graph search, just a problem with how Zuck was thinking about the permissions model.
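(A minimal sketch of the "visible in both directions" rule described above, with hypothetical data and names; this is not Facebook's actual implementation. An edge is searchable only if the querier can see it from both endpoints' privacy settings.)

```python
# Hypothetical friendship edges and per-user visibility settings.
friends = {("alice", "bob"), ("bob", "carol")}

# Who each user's friends list is visible to (illustrative data only).
visible_to = {
    "alice": {"dave", "bob"},
    "bob": {"dave", "alice"},
    "carol": {"bob"},
}

def searchable_edges(querier):
    """Return only edges the querier may see from BOTH endpoints."""
    result = set()
    for a, b in friends:
        if querier in visible_to.get(a, set()) and querier in visible_to.get(b, set()):
            result.add((a, b))
    return result

print(searchable_edges("dave"))   # dave can see alice<->bob from both sides
print(searchable_edges("carol"))  # carol can't see any edge from both sides
```

Under this rule, scraping others' pages no longer reveals edges the owner has hidden from you, because one hidden direction is enough to suppress the edge.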

@kick @enkiv2 @dredmorbius "sometimes people get punished for things we don't think they should be punished for" is not an argument in favor of not having any limits at all, so I'm not super interested in debating it.

@kick @enkiv2 @dredmorbius Super uninterested in a 4chan vs Reddit debate. I couldn't care less about 4chan getting blamed for terrible shit they aren't actually responsible for given all the terrible shit they (or rather the shitheads they allowed to take over) were responsible for.

@kick @enkiv2 @dredmorbius Actually I'm being too generous. The folks there were plenty comfortable with racist, homophobic, and transphobic language from the very beginning. If Moot had deliberately set out to build a Nazi indoctrination camp, I have no idea what he would have done differently.

@kick @enkiv2 @dredmorbius Zuck was the product owner. I'm aware former Google employees designed the tech; I worked there for the entire time it was being designed and used the internal versions of the same technology.

Please take your arrogance elsewhere.

@kick @enkiv2 @dredmorbius @freakazoid I suspect that probably every competent spy agency is delighted with their chance to blackmail current and future politicians with the data they scraped, or to figure out how to get agents close to them, or who their family members are. Journalists don't seem to be doing much with this data, although I could be wrong about that? I suppose nobody can publicly admit to having it at this point.

@freakazoid @enkiv2 @kick @dredmorbius i feel like making it possible to unpublish information requires criminalizing private information sharing and archival and anonymous communication; is there a less severe way?

@enkiv2 @kick @dredmorbius It's not a question of unpublishing what others have published. It's about supporting the ability to ask that others unpublish things they have published. It need neither be reliable nor perfect in order to reduce harm. But it needs to exist.

@freakazoid @enkiv2 @dredmorbius

You're being kind of ridiculous, which is kind of frustrating to see from someone who otherwise has been mostly at least together, view-wise.

There's a board dedicated to queer people (three of them if you include boards dedicated to queer anime/manga/etc), 90% of boards have zero political discussion (I'm not joking about this, some boards even ban it if I recall correctly), and Moot wasn't "comfortable" with any of that stuff; he's not a Nazi nor Nazi sympathizer, hell, he works at Google now.

He (rightfully) believed that spaces where people can interact without identifying themselves are important, which is the correct view to have.

@enkiv2 @kick @dredmorbius At any rate I feel that I've given conclusive proof that this needs to exist. If you remain unconvinced then there seems to be little point in my expending additional effort trying to convince you.

@freakazoid @enkiv2 @dredmorbius You worked there? Using your logic from elsewhere in this thread: why were you willingly ruining society?

Facebook was controversial from the outset, and by the year that that product was launched, people knew what it was doing pretty well (and it's not like Facebook employees couldn't get jobs elsewhere).

@kragen @enkiv2 @dredmorbius @freakazoid They did for years for source hunting, if I remember correctly, though I'll admit I may not remember correctly.

@kick @enkiv2 @dredmorbius @freakazoid Oh, interesting! I'd like to find out more if you find something.

@kragen @enkiv2 @dredmorbius @freakazoid Luckily, the Wikipedia article looks like it mentions an occurrence (I wasn’t aware of this one, actually). Bellingcat (which is a low-volume but very interesting investigative publication) apparently used it pretty heavily.

(A quote from the article that Wikipedia cites: “Now that Graph Search has gone down, it’s become evident that it’s used by some incredibly important section[s] of society, from human rights investigators and citizens wanting to hold their countries to account, to police investigating people trafficking and sexual slavery, to emergency responders,” Waters told Motherboard in an online chat.)

@dredmorbius @enkiv2 @freakazoid @kick
PLT=programming language theory

@kragen @freakazoid @enkiv2 @kick @dredmorbius
This is sort of my point -- either we have an 'ask nicely' or we have a state-enforced 'ask nicely', & 'ask nicely' without state enforcement is a social norm thing.

@enkiv2 @kragen @freakazoid @dredmorbius For certain classes of published stuff, in Europe we already have legal ways to demand unpublication and the sky has not fallen as a result. And I think it's forcing techies in EU to (usually grudgingly) accept that technical choices are rarely free of social consequences.

@enkiv2 @dredmorbius @freakazoid @kick

Python, Linux, Tomato

@freakazoid @enkiv2 @kick @dredmorbius
I'm not opposed to ramifications for bad behavior. I'm trying to figure out how to encourage punishment to be equitable. Part of that is preventing motivated misrepresentation (and power asymmetry in misrepresentation). Right now would-be blackmailers choose what gets to become history, so they can spin anything as a sin.

@freakazoid @enkiv2 @kick @dredmorbius
OK. I'm fine with that, and most mature systems for static content have facilities for that (e.g., IPFS has a hash blacklist for both fetching & forwarding that's basically the same as a killfile, along with mechanisms for folks to share these blacklists with each other).
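(A toy sketch of the shareable hash-blacklist idea mentioned above; the class and hash values are made up for illustration and this is not IPFS's actual API. A node consults its denylist before fetching or forwarding a content address, and can merge in lists shared by peers.)

```python
class DenyList:
    """A killfile-style blacklist of content hashes."""

    def __init__(self):
        self._blocked = set()

    def block(self, content_hash: str) -> None:
        self._blocked.add(content_hash)

    def merge(self, other: "DenyList") -> None:
        # Sharing denylists: union a peer's blocks into ours.
        self._blocked |= other._blocked

    def allows(self, content_hash: str) -> bool:
        # Consulted before both fetching and forwarding.
        return content_hash not in self._blocked


node_a = DenyList()
node_a.block("QmExampleBadContentHash")  # hypothetical hash

node_b = DenyList()
node_b.merge(node_a)  # node B adopts node A's shared denylist

print(node_b.allows("QmExampleBadContentHash"))  # False
print(node_b.allows("QmSomeOtherHash"))          # True
```

As more nodes adopt a shared denylist, blocked content is neither served nor relayed by them, which is the harm-reduction effect described later in the thread: the content gradually becomes un-hosted without any central takedown authority.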

@freakazoid @enkiv2 @kick @dredmorbius
Absolutely! I've sort of been arguing for this. When I pushed transitive blocking over unpublishing, it's because I think the biggest issue is the flatness of addresses/access: folks outside your group, who do not share your norms, can read your messages and force replies on you.

@cathal @enkiv2 @freakazoid @dredmorbius I am skeptical; we will see how much longer the sky of open societies remains standing. Brexit, BoJo, the yellow jackets, Erdogan, Orban, and the Ukraine crisis do not seem like promising developments.

@freakazoid @enkiv2 @kick @dredmorbius
OK, yeah, I'm perfectly fine with this as harm reduction. I wouldn't call it 'unpublishing' because on a technical level, on a service that otherwise supported static content, it would be implemented as a blacklist of addresses (which eventually would become un-hosted as the number of nodes with a copy approached zero).

@kick @enkiv2 @dredmorbius @freakazoid
Well, the communities I had in mind were IPFS and SSB (mostly SSB -- IPFS is much more tech-libertarian with a civil-libertarian streak, while the core SSB developers are very interested in the problem of community norms & how to deal with an environment of high speed off-the-cuff communication with permanent messages).

@cathal @enkiv2 @kragen @freakazoid @dredmorbius
RTBF has some issues, mostly because it's a mechanism where the EU deals directly with Google. It's hard to see how it would apply to somebody running a site off their home internet connection, let alone a p2p system. It's not like it doesn't do some good, but because of its structure it's limited & increases the de-facto power of the stacks.

@enkiv2 @kragen @freakazoid @dredmorbius Well, the RTBF got codified and generalised significantly by the GDPR - the right to demand the amendment of false information, to require delisting or deletion of personally identifying data that is not in the public interest, all that looks like RTBF to me.

@cathal @enkiv2 @kragen @freakazoid @dredmorbius
Fair enough. I always considered the primary result of the GDPR to be more reasonable defaults about data collection -- nobody will actually *agree* to all their traffic being vacuumed up the way it has been, given the choice, & it can't be justified as necessary, so if you want to do business in the EU you just delete logs more often and ditch the tracking pixels.