Asked  7 Months ago    Answers:  5   Viewed   28 times

I am curious as to how programs such as gitolite work -- specifically how do they interact with the SSH protocol to provide a tailored experience. Can somebody provide an example of how I might accomplish something like the following and where I might learn more about this topic?

$ ssh git@github.com
PTY allocation request failed on channel 0
Hi <username>! You've successfully authenticated, but GitHub does not provide shell access.
Connection to github.com closed.

A side question: my primary language is JavaScript. Is it possible to accomplish what I want with NodeJS?

 Answers

56

gitolite in itself is an authorization layer, which doesn't need ssh.
It only needs to know who is calling it, in order to decide whether that person is authorized to run git commands.

SSH is used for authentication (though you could just as well use an Apache HTTP server for authentication, for instance).

The way gitolite is called by ssh is explained in "Gitolite and ssh", and relies on the ssh forced command mechanism:

http://oreilly.com/catalog/sshtdg/chapter/ssh_0802.gif

The ~/.ssh/authorized_keys (on the gitolite ssh server) looks like:

command="[path]/gitolite-shell sitaram",[more options] ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA18S2t...
command="[path]/gitolite-shell usertwo",[more options] ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEArXtCT...

First, sshd finds out which of the public keys in this file matches the incoming login. Once the match has been found, it runs the command given on that line; e.g., if I logged in, it would run [path]/gitolite-shell sitaram.
So the first thing to note is that such users do not get "shell access", which is good!

(A forced command means no interactive shell session: it only executes that one script, always the same, no matter what the client asked for.)

Before running the command, however, sshd sets up an environment variable called SSH_ORIGINAL_COMMAND which contains the actual git command that your workstation sent out.
This is the command that would have run if you did not have the command= part in the authorized keys file.
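For illustration, assuming a repository named testing on the server, SSH_ORIGINAL_COMMAND holds the pack command the git client sent:

SSH_ORIGINAL_COMMAND="git-upload-pack 'testing'"      # set by a git clone / fetch / pull
SSH_ORIGINAL_COMMAND="git-receive-pack 'testing'"     # set by a git push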

When gitolite-shell gets control, it looks at the first argument ("sitaram", "usertwo", etc) to determine who you are. It then looks at the SSH_ORIGINAL_COMMAND variable to find out which repository you want to access, and whether you're reading or writing.

Now that it has a user, repository, and access requested (read/write), gitolite looks at its config file, and either allows or rejects the request.
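For example, a minimal gitolite config rule could look like this (repository and user names reused from the authorized_keys example above); with it, a clone by usertwo would be allowed but a push by usertwo would be rejected:

repo testing
    RW+     =   sitaram
    R       =   usertwo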

The authorized_keys file calls a Perl script (gitolite-shell) only because Gitolite happens to be written in Perl.
It could just as well call a JavaScript program.
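To answer the side question: yes, NodeJS would work. Below is a minimal, hypothetical sketch (not Gitolite's actual code; the file name gl-shell.js, the hard-coded access rules, and the /srv/git base path are all made up) of a forced-command handler that reads the user name from its first argument and the requested git command from SSH_ORIGINAL_COMMAND:

#!/usr/bin/env node
// gl-shell.js -- hypothetical forced-command handler, sketch only (not Gitolite's code)

const { spawnSync } = require('child_process');

const user = process.argv[2];                              // "sitaram", "usertwo", ...
const original = process.env.SSH_ORIGINAL_COMMAND || '';   // what the git client asked for

// Toy access rules; a real tool would load these from a config file.
const acl = {
  testing: { sitaram: 'RW', usertwo: 'R' },
};

if (!original) {
  // No command: behave like the GitHub / gitolite greeting.
  console.log(`Hi ${user}! You've successfully authenticated, but shell access is not provided.`);
  process.exit(0);
}

// Requests look like: git-upload-pack 'testing' (read) or git-receive-pack 'testing' (write).
const m = original.match(/^(git-upload-pack|git-receive-pack) '([^']+)'$/);
if (!m) {
  console.error('command not allowed');
  process.exit(1);
}
const [, verb, repo] = m;
const write = verb === 'git-receive-pack';
const granted = (acl[repo] || {})[user] || '';

if (!(write ? granted.includes('W') : granted.includes('R'))) {
  console.error(`access denied: ${user} cannot ${write ? 'write to' : 'read'} ${repo}`);
  process.exit(1);
}

// Allowed: hand over to the real git pack command, confined to a fixed base directory.
// (A real implementation would also sanitize the repository name.)
const result = spawnSync(verb, [`/srv/git/${repo}.git`], { stdio: 'inherit' });
process.exit(result.status === null ? 1 : result.status);

The matching authorized_keys line would then point at this script instead of gitolite-shell, e.g. command="[path]/gl-shell.js sitaram",[more options] ssh-rsa AAAA..., and sshd takes care of the rest.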


If you ssh to GitHub without any command, you get a greeting message, like the one you mention in your question.
Gitolite displays a similar message, as detailed in the print_version() function of the info command script:

sub print_version {
    chomp( my $hn = `hostname -s 2>/dev/null || hostname` );
    my $gv = substr( `git --version`, 12 );
    $ENV{GL_USER} or _die "GL_USER not set";
    print "hello $ENV{GL_USER}, this is " . ($ENV{USER} || "httpd") . "@$hn running gitolite3 " . version() . " on git $gvn";
}

The message looks like:

hello admin, this is git@server running gitolite3 v3.0-12-ge0ed141 on git 1.7.3.4

The late-2013 Gitolite documentation now includes a diagram which summarizes all the pieces:

ssh and Gitolite

Tuesday, June 1, 2021
 
insomiac
answered 7 Months ago
25

They are hints to the compiler to emit instructions that will cause branch prediction to favour the "likely" side of a jump instruction. This can be a big win: if the prediction is correct, the jump instruction is basically free and takes zero cycles. On the other hand, if the prediction is wrong, the processor pipeline needs to be flushed, which can cost several cycles. As long as the prediction is correct most of the time, this tends to be good for performance.

Like all such performance optimisations, you should only do this after extensive profiling has shown that the code really is a bottleneck and, given the micro nature of the optimisation, that it is being run in a tight loop. Generally the Linux developers are pretty experienced, so I would imagine they have done that. They don't really care too much about portability, as they only target gcc, and they have a very precise idea of the assembly they want it to generate.

Tuesday, June 1, 2021
 
rypskar
answered 7 Months ago
11

Your config defines an alias /git/ which will call your gitolite wrapper.
That means it will call it only for addresses like yourServer/git/...
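For reference, such an alias is typically declared in the Apache config along these lines (the wrapper path here is only an example; use whatever your gitolite http setup installed):

ScriptAlias /git/ /var/www/bin/gitolite-suexec-wrapper.sh/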

You should at least try your clone with:

git clone http://myServer/git/testing

As the OP hoistyler notes in this answer, the only remaining issue was an authentication problem with a file-based login.

Sunday, August 15, 2021
 
footy
answered 4 Months ago
87

To be sure, your config file should contain the full path of your private/public key:

IdentityFile /path/to/gitolite

Besides that, make sure $HOME is the same in both cases, and that you are running those commands with the same user id, to rule out any access-rights issue.

Most interesting is your remark that when you change your .git/config's url variable from git:poky to gitolite@git.myserver.com:poky, everything works fine.

That means your public/private key (named ~/.ssh/gitolite(.pub)) is duplicated as ~/.ssh/id_rsa(.pub), which is the default name for those keys, as searched by ssh.
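Alternatively, keep the non-default key name and declare it in ~/.ssh/config; a sketch, assuming git: in your URL refers to an ssh host alias named git:

Host git
    HostName git.myserver.com
    User gitolite
    IdentityFile /path/to/gitolite

With such an entry, git clone git:poky uses the gitolite key without duplicating it as id_rsa.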

Thursday, August 19, 2021
 
some_bloody_fool
answered 4 Months ago
95

I believe this problem is not technology specific.
Since your processing jobs are long running, I suggest that these jobs report their progress during execution. That way, a job which has not reported progress for a substantial duration becomes a clear candidate for cleanup and can then be restarted on another worker role.
How you record progress and do job swapping is up to you. One approach is to use a database as the recording mechanism and to create an agent worker process that pings the job-progress table. If that agent detects a problem, it can take corrective action.

Another approach would be to associate the worker-role identification with the long-running process, and have the worker roles communicate their health status using some sort of heartbeat, as sketched below.
Had the jobs not been long running, you could have recorded the start time of the job instead of a status flag, and used a timeout mechanism to determine whether processing had failed.
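As a rough illustration of the heartbeat idea, here is a small in-memory JavaScript sketch (a real system would persist the timestamps in the job-progress table mentioned above; the 30-second timeout is only an example):

// Workers call heartbeat(jobId) periodically while a job runs;
// an agent process periodically reaps jobs that have gone silent.
const STALE_AFTER_MS = 30 * 1000;     // example timeout; tune to your job lengths
const lastBeat = new Map();           // jobId -> timestamp of last heartbeat

function heartbeat(jobId) {
  lastBeat.set(jobId, Date.now());
}

function reapStaleJobs() {
  const now = Date.now();
  for (const [jobId, ts] of lastBeat) {
    if (now - ts > STALE_AFTER_MS) {
      console.log(`job ${jobId} appears dead; clean it up and restart it on another worker`);
      lastBeat.delete(jobId);
    }
  }
}

setInterval(reapStaleJobs, 5 * 1000);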

Friday, August 27, 2021
 
dzm
answered 4 Months ago