Recent comments posted to this site:
Oh good question!
This gets a tiny bit into internals, but .git/annex/journal-private/ is
where the private information is stored. If you move the files from there
into .git/annex/journal/, they will be committed on the next run of
git-annex.
You would need to take care to avoid overwriting any existing files in the journal, though usually there won't be any.
Also unset annex.private of course.
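A sketch of that move, run here against a scratch directory so nothing real is touched; in an actual repository you would run the same loop from the top of the working tree against the real .git/annex paths:

```shell
# Simulate the layout in a scratch directory; in a real repository,
# operate on .git/annex/journal-private/ and .git/annex/journal/ directly.
repo=$(mktemp -d)
mkdir -p "$repo/.git/annex/journal-private" "$repo/.git/annex/journal"
echo privatedata > "$repo/.git/annex/journal-private/somelog"

cd "$repo"
for f in .git/annex/journal-private/*; do
    # refuse to overwrite an existing journal file of the same name
    if [ -e ".git/annex/journal/$(basename "$f")" ]; then
        echo "collision: $f" >&2
        exit 1
    fi
    mv "$f" .git/annex/journal/
done
ls .git/annex/journal    # somelog is now in the public journal
```

After the move, run git config --unset annex.private in the real repository, and the next git-annex command will commit the journal as usual.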
git-annex findcomputed --inputs is documented to output one line per
input file. If it doesn't behave that way, file a bug.
It would be possible to run git-annex commands in the compute script if you were able to determine where the git repository was. I don't think git-annex currently sets anything in the environment that would help with that.
If the compute program set metadata, though, it would re-set the same metadata when it's used to recompute the files. That might be undesirable behavior if the user has edited the metadata in the meantime.
@Katie, thanks for pointing out that doesn't work. I was able to fix that, so check out a daily build.
I am writing an external special remote using this protocol. It is a little similar to the directory remote: there's a path on the local system where content is stored.
I don't want this location to be saved in the git-annex branch, and I thought I'd be able to use GETGITREMOTENAME to persist it myself. However, I'm running into an issue where GETGITREMOTENAME fails during INITREMOTE (presumably because the remote has not yet been created). It does work during PREPARE, but that feels a bit late to ask for a required piece of configuration.
What are my options? Ideally it would behave much like the directory= field of the directory remote, but I can hand-manage it too if that's the recommendation, as long as I get some identifier for this remote (there can be multiple of these in the same repository).
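One possibility, if resolving this at PREPARE time is acceptable: the protocol lets the remote send GETUUID, and git-annex replies with VALUE followed by the remote's UUID, which is a stable identifier even when several such remotes exist in one repository. Below is a sketch of that exchange, with the reply piped in by hand instead of coming from git-annex, and with an invented mapping-file layout (one "uuid path" pair per line) standing in for the hand-managed configuration:

```shell
# handle_prepare sketches a remote resolving its local storage path at
# PREPARE time. In a real remote the reply to GETUUID arrives from
# git-annex on stdin; here it is piped in by hand. The mapping file
# format is invented for illustration.
handle_prepare() {
    mapping=$1
    echo GETUUID                  # ask git-annex which remote this is
    read -r reply                 # expect a reply like: VALUE <uuid>
    uuid=${reply#VALUE }
    # look up the hand-managed local path for this remote instance
    path=$(awk -v u="$uuid" '$1 == u { print $2 }' "$mapping")
    if [ -n "$path" ]; then
        echo PREPARE-SUCCESS
    else
        echo "PREPARE-FAILURE no local path recorded for $uuid"
    fi
}

# demo: pretend git-annex answered GETUUID with an invented uuid
mapping=$(mktemp)
echo "1234-fake-uuid /srv/annexdata" > "$mapping"
printf 'VALUE 1234-fake-uuid\n' | handle_prepare "$mapping"
```

The uuid, path, and mapping file here are all made up; the point is only that the UUID, not the remote name, is the per-remote key.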
The only time git-annex will complain about being unable to lock down a file on a remote is when you are dropping a file from a special remote and the only other copy is in another special remote.
    drop foo (from dirremote...) (unsafe)
      Unable to lock down 1 copy of file necessary to safely drop it.
      These remotes do not support locking: otherdirremote
      (Use --force to override this check, or adjust numcopies.)
In that situation, you can either use --force, or git-annex get the file,
drop it from the remote, and then drop it from the local repository.
The latter avoids any possible concurrency problems, but --force is of
course faster, and would be fine in your situation.
Dropping a file from a local repository that is present in a special remote does not have this problem.
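For the transcript above, the get-first sequence would look something like this, assuming the file is foo and the remotes are named as in that output:

```sh
git annex get foo                     # fetch a local copy first
git annex drop --from dirremote foo   # safe now: another verifiable copy exists
git annex drop foo                    # finally drop the local copy again
```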
That makes a lot of sense. So if I understood things right, the correct place to work on this is rclone. I think I'll ask what they think of this kind of use case.
Thanks for the explanation!