tcp: do not lock listener to process SYN packets
Everything should now be ready to finally allow SYN packet
processing without holding the listener lock.
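
For readers skimming the diff, a minimal sketch of the reshaped
receive path (simplified from the tcp_v4_rcv() hunks below, not the
verbatim source; the IPv6 side is symmetric):

    /* Sketch: tcp_v4_rcv() after this patch. Listener packets are
     * dispatched before the socket spinlock is taken; every other
     * state keeps the existing bh_lock_sock_nested() scheme.
     */
    if (sk->sk_state == TCP_LISTEN) {
            ret = tcp_v4_do_rcv(sk, skb);  /* SYN / handshake-ACK work */
            goto put_and_return;           /* no bh_lock_sock_nested() */
    }

    sk_incoming_cpu_update(sk);            /* now non-listeners only */

    bh_lock_sock_nested(sk);
    /* ... established / backlog processing ... */
    bh_unlock_sock(sk);

    put_and_return:
    sock_put(sk);                          /* listener refcount still paid */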

Tested:

3.5 Mpps SYN flood. Plenty of CPU cycles available.

The next bottleneck is the refcount taken on the listener; it could
be avoided if we dropped the strict SLAB_DESTROY_BY_RCU semantics
for listeners and used regular RCU.

    13.18%  [kernel]  [k] __inet_lookup_listener
     9.61%  [kernel]  [k] tcp_conn_request
     8.16%  [kernel]  [k] sha_transform
     5.30%  [kernel]  [k] inet_reqsk_alloc
     4.22%  [kernel]  [k] sock_put
     3.74%  [kernel]  [k] tcp_make_synack
     2.88%  [kernel]  [k] ipt_do_table
     2.56%  [kernel]  [k] memcpy_erms
     2.53%  [kernel]  [k] sock_wfree
     2.40%  [kernel]  [k] tcp_v4_rcv
     2.08%  [kernel]  [k] fib_table_lookup
     1.84%  [kernel]  [k] tcp_openreq_init_rwin
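
The sock_put() entry above is that refcount cost. An illustrative
sketch of the two lookup disciplines (lookup_listener_hash() is a
hypothetical placeholder, not the real call site):

    /* Today, with SLAB_DESTROY_BY_RCU: the slab object may be reused
     * for another socket while we inspect it, so the lookup must pin
     * it with a refcount and then re-validate its keys.
     */
    rcu_read_lock();
    sk = lookup_listener_hash(net, dport);          /* hypothetical */
    if (sk && !atomic_inc_not_zero(&sk->sk_refcnt))
            sk = NULL;                              /* lost a reuse race */
    rcu_read_unlock();
    /* ... process the packet ... */
    sock_put(sk);                                   /* the 4.22% above */

    /* With regular RCU freeing, the whole packet could instead be
     * handled inside one read-side section, skipping the refcount:
     */
    rcu_read_lock();
    sk = lookup_listener_hash(net, dport);          /* hypothetical */
    /* ... process the packet; sk cannot be freed until we unlock ... */
    rcu_read_unlock();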

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet authored and davem330 committed Oct 3, 2015
1 parent 92d6f17 commit e994b2f
Showing 2 changed files with 18 additions and 4 deletions.
net/ipv4/tcp_ipv4.c: 9 additions & 2 deletions
@@ -1355,7 +1355,7 @@ static struct sock *tcp_v4_cookie_check(struct sock *sk, struct sk_buff *skb)
 }
 
 /* The socket must have it's spinlock held when we get
- * here.
+ * here, unless it is a TCP_LISTEN socket.
  *
  * We have a potential double-lock case here, so even when
  * doing backlog processing we use the BH locking scheme.
@@ -1619,9 +1619,15 @@ int tcp_v4_rcv(struct sk_buff *skb)
 	if (sk_filter(sk, skb))
 		goto discard_and_relse;
 
-	sk_incoming_cpu_update(sk);
 	skb->dev = NULL;
 
+	if (sk->sk_state == TCP_LISTEN) {
+		ret = tcp_v4_do_rcv(sk, skb);
+		goto put_and_return;
+	}
+
+	sk_incoming_cpu_update(sk);
+
 	bh_lock_sock_nested(sk);
 	tcp_sk(sk)->segs_in += max_t(u16, 1, skb_shinfo(skb)->gso_segs);
 	ret = 0;
@@ -1636,6 +1642,7 @@ int tcp_v4_rcv(struct sk_buff *skb)
 	}
 	bh_unlock_sock(sk);
 
+put_and_return:
 	sock_put(sk);
 
 	return ret;
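
Why no spinlock is needed for listeners: on the TCP_LISTEN path,
tcp_v4_do_rcv() only matches or creates request/child sockets, work
the earlier patches in this series made safe outside the listener
lock. Abridged from tcp_v4_do_rcv() of this era (error handling via
the reset and discard labels is abbreviated):

    if (sk->sk_state == TCP_LISTEN) {
            struct sock *nsk = tcp_v4_cookie_check(sk, skb);

            if (!nsk)
                    goto discard;
            if (nsk != sk) {
                    sock_rps_save_rxhash(nsk, skb);
                    sk_mark_napi_id(nsk, skb);
                    if (tcp_child_process(sk, nsk, skb))
                            goto reset;
                    return 0;
            }
    } else
            sock_rps_save_rxhash(sk, skb);

    if (tcp_rcv_state_process(sk, skb))
            goto reset;
    return 0;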
net/ipv6/tcp_ipv6.c: 9 additions & 2 deletions
@@ -1161,7 +1161,7 @@ static struct sock *tcp_v6_syn_recv_sock(const struct sock *sk, struct sk_buff *
 }
 
 /* The socket must have it's spinlock held when we get
- * here.
+ * here, unless it is a TCP_LISTEN socket.
  *
  * We have a potential double-lock case here, so even when
  * doing backlog processing we use the BH locking scheme.
@@ -1415,9 +1415,15 @@ static int tcp_v6_rcv(struct sk_buff *skb)
 	if (sk_filter(sk, skb))
 		goto discard_and_relse;
 
-	sk_incoming_cpu_update(sk);
 	skb->dev = NULL;
 
+	if (sk->sk_state == TCP_LISTEN) {
+		ret = tcp_v6_do_rcv(sk, skb);
+		goto put_and_return;
+	}
+
+	sk_incoming_cpu_update(sk);
+
 	bh_lock_sock_nested(sk);
 	tcp_sk(sk)->segs_in += max_t(u16, 1, skb_shinfo(skb)->gso_segs);
 	ret = 0;
@@ -1432,6 +1438,7 @@ static int tcp_v6_rcv(struct sk_buff *skb)
 	}
 	bh_unlock_sock(sk);
 
+put_and_return:
 	sock_put(sk);
 	return ret ? -1 : 0;
 