clean up if/else nest #1

Merged 2 commits into stelfrag:aclk_agent_1 from underhood:aclk_agent_1b on Feb 5, 2020

Conversation

underhood

Code cleanup: flatten the if/else nest -> more readable code

@stelfrag stelfrag merged commit 623f492 into stelfrag:aclk_agent_1 Feb 5, 2020
@underhood underhood deleted the aclk_agent_1b branch February 5, 2020 12:29
stelfrag pushed a commit that referenced this pull request Jul 7, 2022
Fixes data races reported by helgrind:

```
==00:00:00:01.769 44512== Possible data race during read of size 8 at 0x9767B0 by thread #4
==00:00:00:01.769 44512== Locks held: none
==00:00:00:01.769 44512==    at 0x17CB56: error_log_limit (log.c:627)
==00:00:00:01.769 44512==    by 0x17CEC0: info_int (log.c:716)
==00:00:00:01.769 44512==    by 0x18949F: thread_start (threads.c:173)
==00:00:00:01.769 44512==    by 0x484A486: ??? (in /usr/libexec/valgrind/vgpreload_helgrind-amd64-linux.so)
==00:00:00:01.769 44512==    by 0x4E9CD7F: start_thread (pthread_create.c:481)
==00:00:00:01.769 44512==    by 0x532F76E: clone (clone.S:95)
==00:00:00:01.769 44512==
==00:00:00:01.769 44512== This conflicts with a previous write of size 8 by thread #3
==00:00:00:01.769 44512== Locks held: none
==00:00:00:01.769 44512==    at 0x17CB61: error_log_limit (log.c:627)
==00:00:00:01.769 44512==    by 0x17CEC0: info_int (log.c:716)
==00:00:00:01.769 44512==    by 0x18949F: thread_start (threads.c:173)
==00:00:00:01.769 44512==    by 0x484A486: ??? (in /usr/libexec/valgrind/vgpreload_helgrind-amd64-linux.so)
==00:00:00:01.769 44512==    by 0x4E9CD7F: start_thread (pthread_create.c:481)
==00:00:00:01.769 44512==    by 0x532F76E: clone (clone.S:95)
==00:00:00:01.769 44512==  Address 0x9767b0 is 0 bytes inside data symbol "counter.1"
```

```
==00:00:00:44.536 47685==  Lock at 0x976720 was first observed
==00:00:00:44.536 47685==    at 0x48477EF: ??? (in /usr/libexec/valgrind/vgpreload_helgrind-amd64-linux.so)
==00:00:00:44.536 47685==    by 0x17BBF4: __netdata_mutex_lock (locks.c:86)
==00:00:00:44.536 47685==    by 0x17C514: log_lock (log.c:471)
==00:00:00:44.536 47685==    by 0x17CEC0: info_int (log.c:715)
==00:00:00:44.536 47685==    by 0x458C9E: compute_multidb_diskspace (rrdenginelib.c:279)
==00:00:00:44.536 47685==    by 0x15B170: get_netdata_configured_variables (main.c:671)
==00:00:00:44.536 47685==    by 0x15CE6C: main (main.c:1263)
==00:00:00:44.536 47685==  Address 0x976720 is 0 bytes inside data symbol "log_mutex"
==00:00:00:44.536 47685==
==00:00:00:44.536 47685== Possible data race during write of size 8 at 0x9767A0 by thread #1
==00:00:00:44.536 47685== Locks held: none
==00:00:00:44.536 47685==    at 0x17CB39: error_log_limit (log.c:621)
==00:00:00:44.536 47685==    by 0x15E234: signals_handle (signals.c:258)
==00:00:00:44.536 47685==    by 0x15D880: main (main.c:1534)
==00:00:00:44.536 47685==
==00:00:00:44.536 47685== This conflicts with a previous read of size 8 by thread #9
==00:00:00:44.536 47685== Locks held: 1, at address 0x976720
==00:00:00:44.536 47685==    at 0x17CAA3: error_log_limit (log.c:604)
==00:00:00:44.536 47685==    by 0x17CECA: info_int (log.c:718)
==00:00:00:44.536 47685==    by 0x4624D2: rrdset_done_push (rrdpush.c:344)
==00:00:00:44.536 47685==    by 0x36190C: rrdset_done (rrdset.c:1351)
==00:00:00:44.536 47685==    by 0x1B07E7: Chart::update(unsigned long) (plugin_profile.cc:82)
==00:00:00:44.536 47685==    by 0x1B01D4: updateCharts(std::vector<Chart*, std::allocator<Chart*> >, unsigned long) (plugin_profile.cc:126)
==00:00:00:44.536 47685==    by 0x1B02AC: profile_main (plugin_profile.cc:144)
==00:00:00:44.536 47685==    by 0x1895D4: thread_start (threads.c:185)
==00:00:00:44.536 47685==  Address 0x9767a0 is 0 bytes inside data symbol "start.3"
```
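
A minimal sketch of the shape such a fix can take, assuming the racy statics (`counter`, `start`) in `error_log_limit()` are turned into C11 atomics. The function signature and constants here are illustrative, not netdata's actual code, and the real fix may instead widen the `log_mutex` critical section:

```c
#include <stdatomic.h>
#include <time.h>

/* Shared rate-limiting state. In the flagged code these were plain
 * statics mutated by several threads, some without holding log_mutex,
 * which is exactly the race helgrind reports above. */
static _Atomic unsigned long counter = 0;
static _Atomic long start = 0;

/* Returns non-zero when the caller should suppress the log line. */
int error_log_limit_reached(unsigned long max_per_period, long period) {
    long now = (long)time(NULL);

    if (now - atomic_load_explicit(&start, memory_order_relaxed) > period) {
        /* a new period begins: reset the window start and the counter */
        atomic_store_explicit(&start, now, memory_order_relaxed);
        atomic_store_explicit(&counter, 0, memory_order_relaxed);
    }

    /* fetch_add makes the increment itself race-free */
    return atomic_fetch_add_explicit(&counter, 1, memory_order_relaxed)
           >= max_per_period;
}
```
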
stelfrag pushed a commit that referenced this pull request Dec 9, 2024
* prefer tinysleep over yielding the processor

* split spinlocks to separate files

* rename spinlock initializers

* Optimize ML queuing operations.

- Allocate 25% of cores for ML.
- Split queues by request type.
- Accurate stats for queue operations by type.

* abstracted the circular buffer into a new private structure so it can also be used on the receiver's sending side - no features added yet, only the existing functionality abstracted - not tested yet (see the buffer sketch after this list)

* completed the abstraction of stream circular buffer

* unified list of receivers and senders; opcodes now support both receivers and senders

* use strings in pluginsd

* stream receivers send data back to the child using the event loop

* do not share pgc aral between caches

* pgc uses 4 to 256 partitions, by default equal to the number of CPU cores (see the sizing sketch after this list)

* add forgotten worker job

* workers now monitor spinlock contention

* stream sender tries to lock the sender, but does not wait for it - it will be handled later

* increase the number of web server threads to the number of cpu cores, with a minimum of 6

* use the nowait versions of nd_sock functions

* handle EAGAIN properly

* add spinlock contention tracing for rw_spinlock

* aral lock/unlock contention tracing

* allocate the compressed buffer

* use 128KiB for aral default page size; limit memory protection to 5GiB

* aral uses mmap() for big pages

* enrich log messages

* renamed telemetry to pulse

* unified sender and receiver socket event loops

* logging improvements

* NETDATA_LOG_STREAM_SENDER logs inbound and outbound traffic

* 16k receiver buffer size to improve interactivity

* fix NETDATA_LOG_STREAM_SENDER in sender_execute

* do not stream ML models for charts and dimensions that have not been exposed

* add support for sending QUIT to plugins and waiting for some time for them to quit gracefully

* global spinlock contention per function

* use an aral per pgc partition; use 8 partitions for PGD

* rrdcalc: do not change the frequency of alerts - replication uses arbitrary update frequencies, and adopting them would permanently change the frequency of alerts
replication: use 1/3 of the cores or 1 core every 10 nodes, whichever is smaller (see the sizing sketch after this list)
pgd: use as many aral partitions as there are CPU cores, up to 256

* aral does 1 allocation per page (the structure and the elements together), instead of two (see the single-allocation sketch after this list)

* use the evictor thread only when we run out of memory; restore the optimization of prepending or appending clean pages based on their accesses; let the other caches use the main cache's free memory, reducing I/O when the main cache has enough room

* reduce the number of events per poll() to 10

* aral allocates pages of up to 1MiB; restore processing 100 events per nd_poll() call

* drain the sockets while reading

* receiver sockets should be non-blocking

* add stability detector to aral

* increase the receivers' send buffer

* do not remove the sender or the receiver while we drain the input sockets
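
For the circular-buffer abstraction, a minimal sketch of the shape such a private structure typically takes - the names and API here are illustrative, not the actual netdata stream buffer:

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative private circular buffer: fixed capacity, a write index,
 * and a count of the bytes currently buffered. */
typedef struct circular_buffer {
    char *data;
    size_t size;      /* capacity in bytes */
    size_t write_pos; /* next byte to fill */
    size_t used;      /* bytes currently stored */
} CIRCULAR_BUFFER;

static CIRCULAR_BUFFER *cb_create(size_t size) {
    CIRCULAR_BUFFER *cb = calloc(1, sizeof(*cb));
    if (!cb) return NULL;
    cb->data = malloc(size);
    cb->size = size;
    return cb;
}

/* Append up to len bytes, wrapping at the end of the storage.
 * Returns the number of bytes actually written. */
static size_t cb_write(CIRCULAR_BUFFER *cb, const char *src, size_t len) {
    size_t free_space = cb->size - cb->used;
    if (len > free_space) len = free_space;

    size_t first = cb->size - cb->write_pos;    /* room before the wrap */
    if (first > len) first = len;

    memcpy(cb->data + cb->write_pos, src, first);
    memcpy(cb->data, src + first, len - first); /* wrapped tail, if any */

    cb->write_pos = (cb->write_pos + len) % cb->size;
    cb->used += len;
    return len;
}
```
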
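The "4 to 256 partitions" rule for pgc is a simple clamp on the core count; a sketch of the arithmetic (the function name is illustrative):

```c
/* Illustrative: partition count defaults to the number of CPU cores,
 * clamped to the [4, 256] range the commit describes. */
static size_t pgc_partitions(size_t cpu_cores) {
    if (cpu_cores < 4)   return 4;
    if (cpu_cores > 256) return 256;
    return cpu_cores;
}
```
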
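The replication sizing rule ("1/3 of the cores or 1 core every 10 nodes, min of the two") works out to, for example, 2 threads for a 32-core parent with 20 children. A hedged sketch, with the rounding and the minimum of 1 as assumptions:

```c
/* Illustrative: replication worker count is the smaller of one third
 * of the cores and one thread per 10 nodes, but always at least 1. */
static size_t replication_threads(size_t cpu_cores, size_t nodes) {
    size_t by_cores = cpu_cores / 3;
    size_t by_nodes = (nodes + 9) / 10; /* 1 thread per 10 nodes, rounded up */
    size_t threads = by_cores < by_nodes ? by_cores : by_nodes;
    return threads ? threads : 1;
}
```
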
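"1 allocation per page" usually means the page header and its element storage come from one malloc, via a C99 flexible array member, instead of one malloc for the struct and another for the elements. A minimal sketch, not aral's real layout:

```c
#include <stdlib.h>

/* Illustrative: page metadata and element storage in one allocation. */
struct aral_page {
    size_t element_size;
    size_t elements;
    size_t used;
    unsigned char data[]; /* element storage follows the header */
};

static struct aral_page *page_create(size_t element_size, size_t elements) {
    struct aral_page *p = malloc(sizeof(*p) + element_size * elements);
    if (!p) return NULL;
    p->element_size = element_size;
    p->elements = elements;
    p->used = 0;
    return p;
}
```
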

---------

Co-authored-by: vkalintiris <vasilis@netdata.cloud>