pisshoff

pisshoff is a very simple SSH server, built on thrussh, that exposes mocked versions of a bash shell, some common commands, and a couple of SSH subsystems to act as a honeypot for would-be crackers.

Every action the client takes on the connection is recorded in JSON format in an audit log file.

What does the server expose?

Commands

  • echo
  • exit
  • ls
  • pwd
  • scp
  • uname
  • whoami

Subsystems

  • shell
  • sftp

How?

None of the commands or utilities shell out or otherwise interact with your operating system; you can essentially consider the honeypot "airgapped". Although for all intents and purposes it feels like you're connecting to an actual server, you're really interacting with very simple partial reimplementations of common commands and utilities that do nothing but return the expected output and write to an audit log.
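
As a rough illustration of that approach, here is a hypothetical, self-contained sketch (the names and types below are illustrative, not pisshoff's actual code): a mocked command only pattern-matches on its arguments, returns canned output, and records an audit event.

use std::time::Duration;

// Hypothetical sketch (not pisshoff's actual code) showing the general shape
// of the approach: a mocked command never touches the host OS, it only
// returns canned output and records an audit event.
#[derive(Debug)]
struct AuditEvent {
    start_offset: Duration,
    args: Vec<String>,
}

fn run_mocked_command(
    args: &[String],
    start_offset: Duration,
    audit: &mut Vec<AuditEvent>,
) -> String {
    // Every exec request is audited, recognised or not.
    audit.push(AuditEvent {
        start_offset,
        args: args.to_vec(),
    });

    // Partial reimplementations that do nothing but produce the expected output.
    match args.first().map(String::as_str) {
        Some("whoami") => "root\n".into(),
        Some("pwd") => "/root\n".into(),
        Some("echo") => format!("{}\n", args[1..].join(" ")),
        Some(other) => format!("bash: {other}: command not found\n"),
        None => String::new(),
    }
}

fn main() {
    let mut audit = Vec::new();
    let out = run_mocked_command(&["whoami".into()], Duration::from_secs(1), &mut audit);
    print!("{out}");              // prints "root", yet the host OS was never consulted
    println!("audit: {audit:?}"); // the attempt is recorded for the audit log
}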

Example

$ ssh root@127.0.0.1
bash-5.1$ pwd
/root
bash-5.1$ echo test
test
bash-5.1$ uname -a
Linux cd5079c0d642 5.15.49 #1 SMP PREEMPT Tue Sep 13 07:51:32 UTC 2022 x86_64 GNU/Linux
bash-5.1$ whoami
root
bash-5.1$ exit
$ echo test > test
$ scp test root@127.0.0.1:test
(root@127.0.0.1) Password:
test                                                                                                      100%    5     0.1KB/s   00:00
$ cat audit.log | tail -n 2 | jq
{
  "connection_id": "464d87c9-e8fc-4d24-ab6f-34ee67b094f5",
  "ts": "2023-08-10T20:46:09.837165036Z",
  "peer_address": "127.0.0.1:31732",
  "host": "my-cool-honeypot.dev",
  "environment_variables": [
    ["LC_TERMINAL_VERSION", "4.5.20"],
    ["LANG", "en_GB.UTF-8"],
    ["LC_TERMINAL", "iTerm2"]
  ],
  "events": [
    {
      "start_offset": {
        "secs": 1,
        "nanos": 362803172
      },
      "action": {
        "type": "login-attempt",
        "credential-type": "public-key",
        "kind": "ssh-ed25519",
        "fingerprint": "AAAAC3NzaC1lZDI1NTE5AAAAIK3kwN10QmXsnt7jlZ7mYWXdwjfBmgK3fIp5rji"
      }
    },
    {
      "start_offset": {
        "secs": 7,
        "nanos": 85973767
      },
      "action": {
        "type": "login-attempt",
        "credential-type": "username-password",
        "username": "root",
        "password": "root"
      }
    },
    {
      "start_offset": {
        "secs": 7,
        "nanos": 190169895
      },
      "action": {
        "type": "shell-requested"
      }
    },
    {
      "start_offset": {
        "secs": 11,
        "nanos": 153124524
      },
      "action": {
        "type": "exec-command",
        "args": ["pwd"]
      }
    },
    {
      "start_offset": {
        "secs": 14,
        "nanos": 342192712
      },
      "action": {
        "type": "exec-command",
        "args": ["echo", "test"]
      }
    },
    {
      "start_offset": {
        "secs": 63,
        "nanos": 599852779
      },
      "action": {
        "type": "exec-command",
        "args": ["uname", "-a"]
      }
    },
    {
      "start_offset": {
        "secs": 67,
        "nanos": 368327325
      },
      "action": {
        "type": "exec-command",
        "args": ["whoami"]
      }
    },
    {
      "start_offset": {
        "secs": 166,
        "nanos": 208707438
      },
      "action": {
        "type": "exec-command",
        "args": ["exit"]
      }
    }
  ]
}
{
  "...": "...",
  "events": [
    "...",
    {
      "start_offset": {
        "secs": 4,
        "nanos": 196898172
      },
      "action": {
        "type": "subsystem-request",
        "name": "sftp"
      }
    },
    {
      "start_offset": {
        "secs": 4,
        "nanos": 404745407
      },
      "action": {
        "type": "write-file",
        "path": "test",
        "content": [116, 101, 115, 116, 10] // test
      }
    }
  ]
}
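
Because the audit log is newline-delimited JSON, it can be queried directly with jq. For example, to list every username/password combination that was attempted (field names taken from the log entries shown above):

$ jq -r '.events[].action
    | select(.type == "login-attempt" and ."credential-type" == "username-password")
    | "\(.username):\(.password)"' audit.log
root:root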

Running the server

From source

An example configuration is provided within the repository. Running the server is as simple as building the binary with cargo build --release and calling ./pisshoff-server -c config.toml.
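
As a rough sketch, a config.toml might look like the following; the keys are taken from the NixOS example below, the comments are assumptions, and the example configuration shipped in the repository is the authoritative reference.

# Sketch of a possible config.toml, using the same keys as the NixOS
# example below; consult the repository's example config for the
# authoritative settings and defaults.
listen-address = "127.0.0.1:2233"       # address the honeypot listens on
access-probability = 0.2                # presumably the chance a login attempt is accepted
audit-output-file = "audit.jsonl"       # newline-delimited JSON audit log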

NixOS

Running pisshoff on NixOS is extremely simple: just import the module into your flake.nix and use the provided service:

{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-23.05";

    pisshoff = {
      url = "github:w4/pisshoff";
      inputs.nixpkgs.follows = "nixpkgs";
    };
  };

  outputs = { nixpkgs, pisshoff, ... }: {
    nixosConfigurations.mySystem = nixpkgs.lib.nixosSystem {
      modules = [
        pisshoff.nixosModules.default
        {
          services.pisshoff = {
            enable = true;
            settings = {
              listen-address = "127.0.0.1:2233";
              access-probability = "0.2";
              audit-output-file = "/var/log/pisshoff/audit.jsonl";
            };
          };
        }
        ...
      ];
    };
  };
}

Docker

Running pisshoff in Docker is also simple:

$ docker run -d --name pisshoff ghcr.io/w4/pisshoff:master
$ docker exec -it pisshoff tail -f audit.jsonl
