Where to host issues

Repos in forges are a “wild” example
where sometimes locking, banning etc. are done. And the project
team should, I think, have the ability to say “this issue’s
text is of high importance, no edits allowed”.

since when is board moderation considered to be “wild”? -
i dont think that any forge has a feature to ban specific users
from posting; but some have the feature to restrict posting to
team members - in both cases though, this is non-negotiable - if
the forge allows the repo maintainer to do something, or does
not have a feature to allow the repo maintainer to do something,
then there is nothing that forge-fed can do otherwise - if
forge-fed specifies some feature that the forge does not have,
it is a useless appendage, until some forge does have that
feature - likewise, it would be just as pointless for forge-fed
to try preventing a forge from behaving in its normal way

i can only suggest as before, that this is trying to do too much
for the first iteration - trying to fully de-centralize the
entire forge will take much more time than simply allowing
forges to inter-operate; and i dont think that anyone is
expecting any more than that - people would be very happy if it
only handled the basics; and that would be a great way to get
more people interested sooner - so it would be best to get to
something that can be demonstrated as fully usable, sooner rather
than later, and then work on innovative features afterward

Hmm I’m not sure :-/ when people HTTP GET the issue,

i refrained from commenting on that before - that does not sound
like something that people expect forges to do - when someone
wants to see a ticket on a forge, they point their web browser to
http://the-forge/a-repo/tickets/N - if that was suggesting other
ways to fetch data from the forge; that is something else beyond
what most people will actually want to do - some do have APIs
for that already, for those who want that sort of raw access; so
its not adding anything new really - its a great idea to allow
for future clients; but non-essential - so theres no reason not
to defer it to a future revision
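
for illustration only, a rough sketch of what that raw access
could look like, assuming the forge served activity-pub JSON at
the same ticket URL via content negotiation (the URL and behavior
here are hypothetical):

```python
import requests

ticket_url = "https://the-forge/a-repo/tickets/N"  # hypothetical address

# what a person's web browser effectively does today
html_page = requests.get(ticket_url)

# what a future federation-aware client might do instead, if the forge
# chose to serve the ticket as activity-pub JSON at the same address
ap_response = requests.get(
    ticket_url,
    headers={"Accept": "application/activity+json"},
)
if ap_response.ok:
    print(ap_response.json().get("attributedTo"))
```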

and people believe that lie for a whole week
because repo team didn’t get notified of the
malicious/accidental edit.

as i wrote before, unless there is some way to verify that some
clone/mirror has the identical data, in the same context, as
corresponding exactly to the data in the database of the one
specific forge that is the one used by the upstream dev team,
then it is not a reliable source of information - no one should
believe a word of it, nor that the upstream dev team has ever
seen it

It’s also possible to just send Reject on CoC
violating comments, or comments on issues where commenting is
disabled. Trouble is, how can repo team be sure it’s enforced?

i dont quite understand the proposal of “accept” and “reject” -
if the forge has posting disabled for that tracker, then it is
impossible to “accept” anything - if the forge is allowing posts
on that tracker, and it has no button on its interface like:
“would you like to reject this post before it goes public?”;
then that would be impossible too
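
to be concrete, the only way i can imagine “accept”/“reject”
working, is if it maps onto a feature the forge already has -
something like this hypothetical handler (every name here is
invented for illustration):

```python
def handle_incoming_comment(tracker, activity, send_back):
    """hypothetical forge-side handler for a federated comment"""
    if tracker.posting_disabled:
        # the forge would refuse the post anyway; a Reject could only
        # report that existing fact back to the sender
        send_back({"type": "Reject", "object": activity["id"]})
        return
    comment_url = tracker.store_comment(activity["object"])
    send_back({"type": "Accept", "object": activity["id"],
               "result": comment_url})
```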

over-all, most of this discussion is of the sort: things that
could be possible someday, in theory; but are beyond what
forges do today, or will do in the foreseeable future -
someone would need to write that code first

https://repo.tld and the issue claims that
https://jane.tld opened this issue at 2016-12-19, how can
you be sure that this is what really happened?

that bit should be simple - every message must be signed by a
key belonging to the author - it will necessarily be verified by
the forge; and it should be verifiable independently by anyone
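
as a minimal sketch (assuming ed25519 detached signatures over
the raw message bytes - forge-fed has not fixed any particular
scheme here):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def message_is_authentic(author_pubkey: bytes, signature: bytes,
                         message: bytes) -> bool:
    """anyone holding the author's public key can run this check"""
    try:
        Ed25519PublicKey.from_public_bytes(author_pubkey).verify(
            signature, message)
        return True
    except InvalidSignature:
        return False
```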

i think the perceived problem there is that the person who
operated ‘jane.tld’ in 2016 may not be the same person who operates
it today - but thats not really a problem; unless people are too
lazy to check the signature - if the person who operates
‘jane.tld’ today, does not have the same key; then it is not the
same identity; and the forge would create a new registration
upon receiving any message signed with an unknown key

likewise, it should be irrelevant if the operator of ‘jane.tld’
in 2016 now operates ‘janes-new.tld’ instead - anyone who has the
same signing key; would be represented on the forge as the same
phantom user; regardless of the domain name used by the server
with that signing key - it should be totally possible to operate
a forge-fed-compatible forge without registering any domain name
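
in code terms, the idea is just a lookup keyed on the key itself,
not on any domain name - a rough sketch (the table layout and
fingerprinting are only assumptions):

```python
import hashlib

phantom_users = {}  # key fingerprint -> local user record

def phantom_user_for(author_pubkey: bytes, seen_from_domain: str) -> dict:
    fingerprint = hashlib.sha256(author_pubkey).hexdigest()
    user = phantom_users.get(fingerprint)
    if user is None:
        # first message signed with an unknown key: new registration
        user = {"key": fingerprint, "first_seen_from": seen_from_domain}
        phantom_users[fingerprint] = user
    # the domain ('jane.tld', 'janes-new.tld', or none at all) never
    # decides which identity the message is attributed to
    return user
```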

Create/Offer/Ticket that you host as proof you’re the author,
so in both approaches authorship isn’t lost.

regardless of the AP mechanism, authorship will never be in doubt
from the perspective of the users of the destination forge - the
message carrying the original post was signed by the author -
then the destination forge put the post text into its database,
attributed to the phantom user which it has associated with that
signing key - there is no other authority - if activity-pub will
have trouble representing the authority of the forge database
accurately, then there would need to be some clear standard
disclaimer, that this information or the identity of its author
does not necessarily agree with the forge used by the team that
is supposedly handling it

no one should need to host anything, in order to prove
authorship - the original signature should be sufficient - if
the “owner” of the AP object can not be the author, then it
would need to carry with it, the public key of the author, and
any interface presenting it, would need to be able to recognize
the key in the payload, and attribute the message to the
original author
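
something along these lines, roughly - the field names follow the
publicKey convention commonly used for activity-pub actors, but
attaching the key to the object itself is only an assumption of
this sketch:

```python
ticket = {
    "type": "Ticket",
    "attributedTo": "https://jane.tld/",
    "content": "example ticket text",
    # carried along so any interface can attribute the post to the
    # original author, even if jane.tld is long gone
    "publicKey": {
        "owner": "https://jane.tld/",
        "publicKeyPem": "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----",
    },
}
```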

just imagine if the only way to prove that you signed an email,
was to host a copy of it yourself - then what if your website
goes offline or you change domain names? - then no one could
verify your signature for any email youve ever sent? - only the
public key needs to be available; and that key will necessarily
be put into the database of the forge, the first time someone
interacts with the forge using forge-fed - that is when the
phantom forge user will be created; and that key is the one and
only thing that authenticates the sender as the phantom user,
to which all interactions with the forge are attributed

if it’s the right thing to do (repo team is the
party with personal interest in issue authenticity and
correctness and availability through its lifetime

this is another of those “non-negotiable” things - it is not
merely the “right thing to do”; it is the only possible way to
handle it - if the data is not in the forge database, then as
far as the project is concerned, it is not a formal part of the
project - once the data enters the forge database, it becomes
part of the project formally, accessible via its web interface;
and is managed by the project team - it is literally impossible
for anyone else to manage it; because forges do not allow anyone
to manage their state without the proper credentials to do so -
adding new tickets and comments is one thing; but no one is
going to want randos changing the state of their tickets

And you can’t even toot some message to repo
followers because repo server isn’t online to do inbox
forwarding. On the other hand, a repo with issues hosted on
lower-reliability servers can be a big pain.

that is always a concern in any network - servers must re-send
anything they want to ensure gets delivered; and servers must poll
around occasionally to collect anything they missed or that was
dropped - a rough sketch of the re-send loop follows the note below

  • reliability is not to be taken for granted, even with a
    completely centralized service - people should be able to host
    forge-fed-compatible forges on their laptops; without leaving
    them powered on 24/7 - Scuttlebutt would be ideal for that;
    because it enforces retrying and polling - admins will just need
    to be content with missing and/or dropping some messages, or be
    more diligent to account for that inevitability
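
the re-send side can be as simple as a backoff loop like this
(the delivery details and intervals are only illustrative):

```python
import time
import requests

def deliver_with_retry(inbox_url: str, activity: dict,
                       max_attempts: int = 8) -> bool:
    delay = 60  # seconds; doubled after each failed attempt
    for _ in range(max_attempts):
        try:
            resp = requests.post(inbox_url, json=activity, timeout=10)
            if resp.status_code < 400:
                return True
        except requests.RequestException:
            pass  # peer is offline (e.g. a laptop forge); try again later
        time.sleep(delay)
        delay *= 2
    return False  # eventually the message is dropped; admins must accept that
```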

BTW, there is a complete Scuttlebutt-based forge that is quite
feature-rich already, and looks to be under fairly active
development - just putting that out there :slight_smile:

When you send a bug or patch
by email, it’s visible and hosted on the project’s public mailing
list, and proof of authenticity is via your PGP signature,

When you send a patch/MR to a git repo, should you host the
git objects resulting from merging your code?

my thoughts exactly - you dont need to host it yourself; because
the data has already reached its final intended destination -
that final destination has everything it needs to handle the
data for evermore into the future, including to notify everyone
else of what happened, and to mediate requests to replicate the
data

either way we need to do access
control and will probably use OCAPs for this

theres another something that baffles me - there should be
nothing to discuss regarding authorization - every forge handles
authorization already, and does so in its own way - that is
non-negotiable - there is nothing that forge-fed could tack on,
that would change the fact that the forge controls access to
its state - forge-fed only needs to send requests, that the forge
can authenticate - the forges can and will decide how to handle
them, and whether to handle them at all
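
schematically, what i mean is something like this (every name
here is invented for illustration):

```python
def handle_federated_request(forge, activity, sender_fingerprint):
    # forge-fed's only job: deliver a request the forge can authenticate
    user = forge.phantom_user_for(sender_fingerprint)
    # the forge's own, pre-existing authorization decides everything else
    if not forge.allows(user, activity["type"], activity.get("target")):
        return "refused by the forge's own access control"
    return forge.apply(activity)
```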

(repo team) isn’t the entity hosting the controlled object
(the issue).

again, yes they are - this is not a “should be” or even a “must
be” - it is a flat: “they are doing that” - if they were not
hosting the issue, then it would not be their issue - it would
be someone else’s - if “the issue” is not in the database where
the state of their issues is kept, then it is clearly not being
handled by the project in any meaningful way - it may not even
be known to them - you can not handle something that lives “in
the cloud” - it needs to be “in your hands”

regarding the main question of the “create” vs “offer”
distinction: that may have some significance for the fediverse;
but the concept of an “offer” is vacuous in the context of
forges - as i was led to understand, activity-pub specifies the
four basic CRUD operations - those are all of the message types
that are needed in order to interact with a forge in every
way that it can handle

if the data is not related to some resource that is yet in the
database, then you send a “create” request for a new resource to
be created - then the server would typically return a URL, to
where the newly created resource can be accessed - to modify any
data that is already in the database, then you send an “update”
request, referencing the existing resource - if activity-pub
is used in any other way, i dont see how it can accurately
represent how forges actually operate
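
concretely, something like this (the URLs and field names are
hypothetical):

```python
# open a new ticket: "create" a resource that is not yet in the database
create = {
    "type": "Create",
    "actor": "https://jane.tld/",
    "object": {"type": "Ticket", "content": "example bug report"},
    "target": "https://the-forge/a-repo/tickets/",
}
# the server would typically answer with the URL of the new resource, e.g.
new_ticket_url = "https://the-forge/a-repo/tickets/123"

# modify data already in the database: "update", referencing that resource
update = {
    "type": "Update",
    "actor": "https://jane.tld/",
    "object": {"id": new_ticket_url, "content": "example bug report (edited)"},
}
```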

the “offer” concept seems to be about transferring ownership of
the data - practically speaking, all data sent to a server is an
“offer” - if the data is accepted into the database, the
database is the “owner” and inherently has total authority over
it forevermore - it makes no difference how the data was
originally received; and anything that disagrees with the
database is incorrect

again, if the goal was to make a completely new forge system,
one that had no central database (like the Scuttlebutt forge), then
the special activity-pub messages, and the idea that something can
be floating around the cloud and replicated anywhere, yet has an
owner, might make sense; but i dont see how that forge system
could be compatible with the existing database-backed forges that
forge-fed is targeting


This was a long one. I’ll keep it short as I’m at work :slight_smile: and it’s still a lot to process.

I like where the general consensus ended up.

I do agree with @bill-auger in that I’m apprehensive about having issues stored anywhere other than the respective repository. But that’s not necessarily a spec concern.

I think the core things we should focus on are representing a repository as an actor and creating issues on other forges.

Plus of course allowing users to ‘log in’ on a forge with an account on another forge. Which as mentioned should just create a phantom user and store the signing key of that actor for future reference.

there are implications for the spec though - if tickets are
allowed to be kept canonically, on any instance other than the
one which the project team actually uses, then this is no longer
a federated system, but essentially a (much more complex)
distributed system

the practical implication, is that authorization would also need
to be distributed; meaning that every clone would also need to
replicate the original repository ACL somehow, whether or not
anyone from the project will ever use the foreign instance, or
even knows of its existence - that entails new forge-fed
messages for that functionality, and another (rather complex)
work item for implementers; otherwise, there would need to be
some note in the docs, indicating that this feature wont actually
work as expected, unless each fork maintainer manages the ACL
manually - of course, if it is the fork maintainer, and not the
upstream project maintainers who manages the ACL; then it is not
really the same project anymore - it is essentially an
independently-controlled fork, which automatically pollutes the
upstream tracker with its tickets - worse though, in order to
behave as expected, it expects the upstream maintainers to
yield control over the state of their tracker database, to the
maintainers of any fork, bypassing the ACL

i think it is safe to assume that no maintainer is going to want
to yield control over the state of their database; so the
alternative practical implication, is that it would necessarily
introduce inconsistency - if that ticket is federated back to
the upstream project’s instance, it will necessarily be
read/write on that instance anyways; and if the state of that
ticket changes on the fork instance (deleted, closed, etc), it
will not necessarily be reflected on the tracker where it
actually matters: the one on the upstream project’s instance

in theory, forge-fed could also specify a mechanism to maintain
consistent ACLs across all forks; but that would do nothing to
guarantee consistency of the number of tickets and their states
across instances; without making the upstream tracker a slave to
the database of every such fork - the only way to keep them all
consistent, would be to require every fork (including the
upstream) to respect the authority of every other fork,
regarding the state of tickets - that aside, requiring a
mechanism for maintaining consistent ACLs is quite a complex
implementation detail for implementers, well beyond what
forge-fed can reasonably require

worse though, even if global ACLs were trivial to manage, it
necessarily implies one of two situations - either the
maintainer of a fork would not be able to control access to its
bug tracker, but would be a slave to a foreign database (which
essentially makes it a mirror, not a fork); or the project
maintainers would not be able to control access to the upstream
repo, but would be subject to the whims of literally ANY
downstream - in both cases, someone has forfeited control over
their computing - in the former case, this is a mirror, so that
is expected; but in the latter case, the upstream maintainers
have forfeited control over the management of their project -
the latter is obviously absurd, so mandatory ACL replication
would only be acceptable if the upstream manages the ACL for all
forks, as mirrors

there are only two ways this can be done simply and reasonably,
while retaining consistency - either forks are fully independent
development paths, with fully independent authorization; or they
are read-only slave mirrors, with no authorization (other than
destroying the local mirror entirely) - that is nothing novel, of
course - it is exactly the separation most people will expect -
anything else is some form of distributed forge; which would
entail a distributed (read: inconsistent and chaotic)
development methodology - i find it hard to believe that any
maintainer would want to manage their project that way

this was not to say that distributed bug tracking should be
disallowed; but any distributed bug tracking system would need
an intermediate bridge, to maintain sanity with respect to the
authority of the upstream database (i.e. to behave as if it were
not distributed) - that would defer the implementation burden
onto those who actually want to use a distributed bug tracker;
and avoid imposing it on everyone

There’s no need to replicate ACLs. With publicly verifiable OCAPs, all you need to do basically is verify a cryptographic signature. And otherwise, you can rely on the project actor sending an activity (probably Accept) to let you know it approves a certain change.
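
For example (just a sketch; the grant format here is an assumption, not something the spec fixes), verifying such a capability can be a single signature check:

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def capability_is_valid(project_pubkey: bytes, grant: dict,
                        signature: bytes) -> bool:
    """Anyone can check that the project actor really issued this grant."""
    payload = json.dumps(grant, sort_keys=True).encode()
    try:
        Ed25519PublicKey.from_public_bytes(project_pubkey).verify(
            signature, payload)
        return True
    except InvalidSignature:
        return False

# a grant might look like:
# {"grantedTo": "https://jane.tld/", "activity": "Update",
#  "context": "https://the-forge/a-repo/tickets/123"}
```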

Either way, FYI, both Create and Offer flows are in the spec, and neither is strictly required. You can be spec compliant without author-hosted tickets. Right now they’re the recommended default and we’ll see very soon in Vervis how they work in practice.
