
IPs are not getting assigned #21

Closed
asing012 opened this issue Jul 17, 2019 · 9 comments

@asing012

Hey guys,

I followed all the steps to set up proxycannon and I am able to connect to the instance using OpenVPN as well. But I don't see any IPs getting assigned when I run "while true; do curl ifconfig.co; done". It gives me a "curl: (6) Could not resolve host: ifconfig.co" error. I can also see all the instances running properly on AWS.

I am using Kali Linux for the OpenVPN connection.
Here is my OpenVPN trace:
Wed Jul 17 10:48:09 2019 WARNING: file 'client01.key' is group or others accessible
Wed Jul 17 10:48:09 2019 WARNING: file 'ta.key' is group or others accessible
Wed Jul 17 10:48:09 2019 OpenVPN 2.4.6 x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] built on Jul 30 2018
Wed Jul 17 10:48:09 2019 library versions: OpenSSL 1.1.1c 28 May 2019, LZO 2.10
Wed Jul 17 10:48:09 2019 Outgoing Control Channel Authentication: Using 256 bit message hash 'SHA256' for HMAC authentication
Wed Jul 17 10:48:09 2019 Incoming Control Channel Authentication: Using 256 bit message hash 'SHA256' for HMAC authentication
Wed Jul 17 10:48:09 2019 TCP/UDP: Preserving recently used remote address: [AF_INET]3.13.170.173:443
Wed Jul 17 10:48:09 2019 Socket Buffers: R=[87380->87380] S=[16384->16384]
Wed Jul 17 10:48:09 2019 Attempting to establish TCP connection with [AF_INET]3.13.170.173:443 [nonblock]
Wed Jul 17 10:48:10 2019 TCP connection established with [AF_INET]3.13.170.173:443
Wed Jul 17 10:48:10 2019 TCP_CLIENT link local: (not bound)
Wed Jul 17 10:48:10 2019 TCP_CLIENT link remote: [AF_INET]3.13.170.173:443
Wed Jul 17 10:48:10 2019 TLS: Initial packet from [AF_INET]3.13.170.173:443, sid=5aee4d31 12ce02c7
Wed Jul 17 10:48:10 2019 VERIFY OK: depth=1, C=US, ST=CA, L=SanFrancisco, O=Fort-Funston, OU=MyOrganizationalUnit, CN=Fort-Funston CA, name=EasyRSA, emailAddress=me@myhost.mydomain
Wed Jul 17 10:48:10 2019 VERIFY KU OK
Wed Jul 17 10:48:10 2019 Validating certificate extended key usage
Wed Jul 17 10:48:10 2019 ++ Certificate has EKU (str) TLS Web Server Authentication, expects TLS Web Server Authentication
Wed Jul 17 10:48:10 2019 VERIFY EKU OK
Wed Jul 17 10:48:10 2019 VERIFY OK: depth=0, C=US, ST=CA, L=SanFrancisco, O=Fort-Funston, OU=MyOrganizationalUnit, CN=server, name=EasyRSA, emailAddress=me@myhost.mydomain
Wed Jul 17 10:48:10 2019 Control Channel: TLSv1.3, cipher TLSv1.3 TLS_AES_256_GCM_SHA384, 2048 bit RSA
Wed Jul 17 10:48:10 2019 [server] Peer Connection Initiated with [AF_INET]3.13.170.173:443
Wed Jul 17 10:48:11 2019 SENT CONTROL [server]: 'PUSH_REQUEST' (status=1)
Wed Jul 17 10:48:11 2019 PUSH: Received control message: 'PUSH_REPLY,redirect-gateway def1,route 10.0.0.0 255.0.0.0 net_gateway,route 172.16.0.0 255.240.0.0 net_gateway,route 192.168.0.0 255.255.0.0 net_gateway,route 10.10.10.1,topology net30,ping 10,ping-restart 120,ifconfig 10.10.10.6 10.10.10.5,peer-id 0,cipher AES-256-GCM'
Wed Jul 17 10:48:11 2019 OPTIONS IMPORT: timers and/or timeouts modified
Wed Jul 17 10:48:11 2019 OPTIONS IMPORT: --ifconfig/up options modified
Wed Jul 17 10:48:11 2019 OPTIONS IMPORT: route options modified
Wed Jul 17 10:48:11 2019 OPTIONS IMPORT: peer-id set
Wed Jul 17 10:48:11 2019 OPTIONS IMPORT: adjusting link_mtu to 1626
Wed Jul 17 10:48:11 2019 OPTIONS IMPORT: data channel crypto options modified
Wed Jul 17 10:48:11 2019 Data Channel: using negotiated cipher 'AES-256-GCM'
Wed Jul 17 10:48:11 2019 Outgoing Data Channel: Cipher 'AES-256-GCM' initialized with 256 bit key
Wed Jul 17 10:48:11 2019 Incoming Data Channel: Cipher 'AES-256-GCM' initialized with 256 bit key
Wed Jul 17 10:48:11 2019 ROUTE_GATEWAY 10.0.2.2/255.255.255.0 IFACE=eth0 HWADDR=08:00:27:a3:fd:32
Wed Jul 17 10:48:11 2019 TUN/TAP device tun0 opened
Wed Jul 17 10:48:11 2019 TUN/TAP TX queue length set to 100
Wed Jul 17 10:48:11 2019 do_ifconfig, tt->did_ifconfig_ipv6_setup=0
Wed Jul 17 10:48:11 2019 /sbin/ip link set dev tun0 up mtu 1500
Wed Jul 17 10:48:11 2019 /sbin/ip addr add dev tun0 local 10.10.10.6 peer 10.10.10.5
Wed Jul 17 10:48:11 2019 /sbin/ip route add 3.13.170.173/32 via 10.0.2.2
Wed Jul 17 10:48:11 2019 /sbin/ip route add 0.0.0.0/1 via 10.10.10.5
Wed Jul 17 10:48:11 2019 /sbin/ip route add 128.0.0.0/1 via 10.10.10.5
Wed Jul 17 10:48:11 2019 /sbin/ip route add 10.0.0.0/8 via 10.0.2.2
Wed Jul 17 10:48:11 2019 /sbin/ip route add 172.16.0.0/12 via 10.0.2.2
Wed Jul 17 10:48:11 2019 /sbin/ip route add 192.168.0.0/16 via 10.0.2.2
Wed Jul 17 10:48:11 2019 /sbin/ip route add 10.10.10.1/32 via 10.10.10.5
Wed Jul 17 10:48:11 2019 WARNING: this configuration may cache passwords in memory -- use the auth-nocache option to prevent this
Wed Jul 17 10:48:11 2019 Initialization Sequence Completed
Any help would be appreciated :)

Thanks

@ccammilleri
Contributor

A couple of things I'd check to troubleshoot:

  1. From your Kali VPN client, can you verify that it is routing traffic out the VPN tunnel with:
ip route get 1.1.1.1
  2. From the control server, can you confirm that exit nodes have been created by running the following command in the proxycannon-ng/nodes/aws directory:
terraform show
  3. From the control server, can you verify that the routing table is appropriately load balancing to the exit nodes by running the command below (you should see entries for each exit node output by your previous terraform command):
ip route show table loadb
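
As an extra client-side check, here is a minimal sketch building on the same ifconfig.co loop from your report (it assumes DNS is resolving through the tunnel):

# fire 20 requests and count how many distinct exit IPs appear
for i in $(seq 1 20); do curl -s ifconfig.co; done | sort | uniq -c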

Paste your results here and this should give us an idea of what's wrong. Hope this helps!

@asing012
Author

Thank you for replying. Here are the results:

(1) 1.1.1.1 via 10.10.10.5 dev tun0 src 10.10.10.6 uid 1000
(2)
aws_instance.exit-node.0: id = i-09d5a6dcf10c140af ami = ami-0f65671a86f061fcd arn = arn:aws:ec2:us-east-2:069605172168:instance/i-09d5a6dcf10c140af associate_public_ip_address = true availability_zone = us-east-2c cpu_core_count = 1 cpu_threads_per_core = 1 credit_specification.# = 1 credit_specification.0.cpu_credits = standard disable_api_termination = false ebs_block_device.# = 0 ebs_optimized = false ephemeral_block_device.# = 0 get_password_data = false iam_instance_profile = instance_state = running instance_type = t2.micro ipv6_address_count = 0 ipv6_addresses.# = 0 key_name = proxycannon monitoring = false network_interface.# = 0 password_data = placement_group = primary_network_interface_id = eni-04a3291ce40df0b7c private_dns = ip-172-31-43-74.us-east-2.compute.internal private_ip = 172.31.43.74 public_dns = ec2-3-19-53-172.us-east-2.compute.amazonaws.com public_ip = 3.19.53.172 root_block_device.# = 1 root_block_device.0.delete_on_termination = true root_block_device.0.iops = 100 root_block_device.0.volume_id = vol-08b8d78f25302e467 root_block_device.0.volume_size = 8 root_block_device.0.volume_type = gp2 security_groups.# = 1 security_groups.434247258 = exit-node-sec-group source_dest_check = false subnet_id = subnet-0dbfa140 tags.% = 1 tags.Name = exit-node tenancy = default volume_tags.% = 0 vpc_security_group_ids.# = 1 vpc_security_group_ids.2631189691 = sg-01ba836001bf3c336
aws_instance.exit-node.1: id = i-0c35be22cf67241c7 ami = ami-0f65671a86f061fcd arn = arn:aws:ec2:us-east-2:069605172168:instance/i-0c35be22cf67241c7 associate_public_ip_address = true availability_zone = us-east-2c cpu_core_count = 1 cpu_threads_per_core = 1 credit_specification.# = 1 credit_specification.0.cpu_credits = standard disable_api_termination = false ebs_block_device.# = 0 ebs_optimized = false ephemeral_block_device.# = 0 get_password_data = false iam_instance_profile = instance_state = running instance_type = t2.micro ipv6_address_count = 0 ipv6_addresses.# = 0 key_name = proxycannon monitoring = false network_interface.# = 0 password_data = placement_group = primary_network_interface_id = eni-0563de5771888d098 private_dns = ip-172-31-32-202.us-east-2.compute.internal private_ip = 172.31.32.202 public_dns = ec2-18-188-69-145.us-east-2.compute.amazonaws.com public_ip = 18.188.69.145 root_block_device.# = 1 root_block_device.0.delete_on_termination = true root_block_device.0.iops = 100 root_block_device.0.volume_id = vol-0eab5ad0f29cc32db root_block_device.0.volume_size = 8 root_block_device.0.volume_type = gp2 security_groups.# = 1 security_groups.434247258 = exit-node-sec-group source_dest_check = false subnet_id = subnet-0dbfa140 tags.% = 1 tags.Name = exit-node tenancy = default volume_tags.% = 0 vpc_security_group_ids.# = 1 vpc_security_group_ids.2631189691 = sg-01ba836001bf3c336
aws_instance.exit-node.2: id = i-02a81b37025168029 ami = ami-0f65671a86f061fcd arn = arn:aws:ec2:us-east-2:069605172168:instance/i-02a81b37025168029 associate_public_ip_address = true availability_zone = us-east-2c cpu_core_count = 1 cpu_threads_per_core = 1 credit_specification.# = 1 credit_specification.0.cpu_credits = standard disable_api_termination = false ebs_block_device.# = 0 ebs_optimized = false ephemeral_block_device.# = 0 get_password_data = false iam_instance_profile = instance_state = running instance_type = t2.micro ipv6_address_count = 0 ipv6_addresses.# = 0 key_name = proxycannon monitoring = false network_interface.# = 0 password_data = placement_group = primary_network_interface_id = eni-0591c3301b6bc3edf private_dns = ip-172-31-47-165.us-east-2.compute.internal private_ip = 172.31.47.165 public_dns = ec2-13-59-53-200.us-east-2.compute.amazonaws.com public_ip = 13.59.53.200 root_block_device.# = 1 root_block_device.0.delete_on_termination = true root_block_device.0.iops = 100 root_block_device.0.volume_id = vol-0d585687da4c05044 root_block_device.0.volume_size = 8 root_block_device.0.volume_type = gp2 security_groups.# = 1 security_groups.434247258 = exit-node-sec-group source_dest_check = false subnet_id = subnet-0dbfa140 tags.% = 1 tags.Name = exit-node tenancy = default volume_tags.% = 0 vpc_security_group_ids.# = 1 vpc_security_group_ids.2631189691 = sg-01ba836001bf3c336
aws_instance.exit-node.3: id = i-09412f5a5522dc7f8 ami = ami-0f65671a86f061fcd arn = arn:aws:ec2:us-east-2:069605172168:instance/i-09412f5a5522dc7f8 associate_public_ip_address = true availability_zone = us-east-2c cpu_core_count = 1 cpu_threads_per_core = 1 credit_specification.# = 1 credit_specification.0.cpu_credits = standard disable_api_termination = false ebs_block_device.# = 0 ebs_optimized = false ephemeral_block_device.# = 0 get_password_data = false iam_instance_profile = instance_state = running instance_type = t2.micro ipv6_address_count = 0 ipv6_addresses.# = 0 key_name = proxycannon monitoring = false network_interface.# = 0 password_data = placement_group = primary_network_interface_id = eni-03ace774a372f4ea4 private_dns = ip-172-31-46-205.us-east-2.compute.internal private_ip = 172.31.46.205 public_dns = ec2-18-222-251-198.us-east-2.compute.amazonaws.com public_ip = 18.222.251.198 root_block_device.# = 1 root_block_device.0.delete_on_termination = true root_block_device.0.iops = 100 root_block_device.0.volume_id = vol-0aac3f2213197082f root_block_device.0.volume_size = 8 root_block_device.0.volume_type = gp2 security_groups.# = 1 security_groups.434247258 = exit-node-sec-group source_dest_check = false subnet_id = subnet-0dbfa140 tags.% = 1 tags.Name = exit-node tenancy = default volume_tags.% = 0 vpc_security_group_ids.# = 1 vpc_security_group_ids.2631189691 = sg-01ba836001bf3c336
aws_security_group.exit-node-sec-group: id = sg-01ba836001bf3c336 arn = arn:aws:ec2:us-east-2:069605172168:security-group/sg-01ba836001bf3c336 description = Managed by Terraform egress.# = 1 egress.482069346.cidr_blocks.# = 1 egress.482069346.cidr_blocks.0 = 0.0.0.0/0 egress.482069346.description = egress.482069346.from_port = 0 egress.482069346.ipv6_cidr_blocks.# = 0 egress.482069346.prefix_list_ids.# = 0 egress.482069346.protocol = -1 egress.482069346.security_groups.# = 0 egress.482069346.self = false egress.482069346.to_port = 0 ingress.# = 1 ingress.482069346.cidr_blocks.# = 1 ingress.482069346.cidr_blocks.0 = 0.0.0.0/0 ingress.482069346.description = ingress.482069346.from_port = 0 ingress.482069346.ipv6_cidr_blocks.# = 0 ingress.482069346.prefix_list_ids.# = 0 ingress.482069346.protocol = -1 ingress.482069346.security_groups.# = 0 ingress.482069346.self = false ingress.482069346.to_port = 0 name = exit-node-sec-group owner_id = 069605172168 revoke_rules_on_delete = false tags.% = 0 vpc_id = vpc-8decaee5
(3)
default proto static
        nexthop via 172.31.32.202 dev eth0 weight 100
        nexthop via 172.31.46.205 dev eth0 weight 100
        nexthop via 172.31.43.74 dev eth0 weight 100
        nexthop via 172.31.47.165 dev eth0 weight 100

@ccammilleri
Contributor

From a quick glance everything seems to be correct, which leads me to think there is some general networking failure somewhere. I'd revisit the networking setup and make sure the control server is set up correctly. Specifically, these commands (do not blindly rerun install.sh):

sysctl -w net.ipv4.ip_forward=1
ip rule add from 10.10.10.0/24 table loadb
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

If the above checks out OK, I'd start gathering more basic networking info; a rough sketch of these checks follows below.
Is traffic making it to the exit nodes? You can test by running tcpdump on the tun interfaces on both the control server and an exit node.
Can the exit nodes resolve DNS and talk to the internet?
Is SNATing working on the exit nodes?
Make sure your VPN tunnels are not flapping (going up and down).
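
A minimal sketch of those checks (the interface names, test targets, and the conntrack tool are my assumptions; adjust to your setup):

# From the Kali client, generate easy-to-spot test traffic through the tunnel
ping -c 5 1.1.1.1

# On the control server: is client traffic arriving on the VPN side and
# leaving toward the exit nodes?
sudo tcpdump -ni tun0 icmp
sudo tcpdump -ni eth0 icmp

# On an exit node: does the forwarded traffic arrive, and does it leave with
# the node's own source address (i.e. is SNAT/MASQUERADE being applied)?
sudo tcpdump -ni eth0 icmp

# From an exit node itself: can it resolve names and reach the internet?
dig +short example.com
curl -s ifconfig.co

# If conntrack-tools is installed, you can inspect the NAT translations directly
sudo conntrack -L | head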

@asing012
Author

I ran those commands and it's working now. Any idea what was happening?

@ccammilleri
Contributor

Yeah, I have one idea: some of those commands don't persist across a reboot. Did you happen to reboot the control server after running install.sh?
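
If that's what happened, here's a minimal sketch of making those settings survive a reboot (assuming a Debian/Ubuntu control server; the iptables-persistent package and the sysctl file name are suggestions of mine, not part of the proxycannon setup scripts):

# persist IP forwarding
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-proxycannon.conf
sudo sysctl --system

# persist the MASQUERADE rule (Debian/Ubuntu)
sudo apt-get install -y iptables-persistent
sudo netfilter-persistent save

# the 'ip rule' entry has no standard persistence file; one option is to
# re-add it at boot, e.g. from /etc/rc.local or a small systemd unit
ip rule add from 10.10.10.0/24 table loadb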

@asing012
Author

Yes, I think I did. Anyways, thank you for solving this issue.

@ccammilleri
Contributor

Thanks for troubleshooting! I'm going to add this bug as an issue and update the wiki.

@er4z0r

er4z0r commented Mar 20, 2020

Hey, I know this issue is closed, but I am still running into problems.

  • I had the setup up and running.
  • Then I did a terraform destroy to see if it was properly cleaning up my instances.
  • I also stopped the control server.
  • I restarted it and spun up 5 (!) nodes.
  • I had the connection problems described above.
  • I executed the fixes described above.

Forwarding is enabled

$sudo sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1

A rule for policy-based routing for traffic from the VPN connection exists:

$ip rule list
0:	from all lookup local 
32765:	from 10.10.10.0/24 lookup loadb 
32766:	from all lookup main 
32767:	from all lookup default 

Masquerading is enabled.

$ sudo iptables -t nat -nvL
Chain PREROUTING (policy ACCEPT 124 packets, 7440 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain INPUT (policy ACCEPT 2 packets, 120 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 98 packets, 6944 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain POSTROUTING (policy ACCEPT 45 packets, 3130 bytes)
 pkts bytes target     prot opt in     out     source               destination         
  175 11134 MASQUERADE  all  --  *      eth0    0.0.0.0/0            0.0.0.0/0  

When checking the exit IPs from my box while connected to the control-server via VPN:

$ while true; do curl ifconfig.co; done
3.122.59.13
3.126.245.78
3.122.59.13
3.126.245.78
3.122.59.13
3.126.245.78
...

So I only see two exit IPs where there should be five. Could you please help me diagnose this?
I'd like to avoid going through the setup every time; instead I want to keep the control-server instance around in a stopped state and use it to spin up the exit nodes when I need them.

@er4z0r

er4z0r commented Mar 20, 2020

In the end, reviewing the scripts in setup turned up a solution. The file ~/proxycannon-ng/setup/setup-load-balancing.sh contains the following command, which was not mentioned above:

# use L4 (src ip, src port, dest ip, dest port) hashing for load balancing instead of L3 (src ip, dst ip)
echo 1 > /proc/sys/net/ipv4/fib_multipath_hash_policy

After confirming this setting was indeed set to 0 after a fresh start of the control-server, I re-enabled it and now the load balancer seems to correctly choose from all five nodes.
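
To make that setting survive future restarts of the control-server, a minimal sketch (the sysctl.d file name is arbitrary):

# persist L4 multipath hashing across reboots
echo 'net.ipv4.fib_multipath_hash_policy = 1' | sudo tee /etc/sysctl.d/99-multipath.conf
sudo sysctl --system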

Hope this helps others.
