
Implement auto synchronization #8

Open
rynffoll opened this issue Feb 17, 2017 · 90 comments
@rynffoll

Please, add auto synchronization for remote repositories.
Manual synchronization isn't convenient, and it can also cause synchronization conflicts.

@ghost

ghost commented Mar 22, 2017

If you have Tasker and a rooted phone, try running this in Tasker via a shell command (it'll force a sync):
su -c 'am startservice com.orgzly/com.orgzly.android.sync.SyncService'
To stop the service (you may need to do this before starting again):
su -c 'am stopservice com.orgzly/com.orgzly.android.sync.SyncService'

@nevenz
Member

nevenz commented Mar 23, 2017

If you have Tasker and a rooted phone, try running this in Tasker via a shell command (it'll force a sync):
su -c 'am startservice com.orgzly/com.orgzly.android.sync.SyncService'

Service can be made "exported", so it doesn't require root to start, if that would help, short(-ish)-term.
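For context, exporting a service is a one-attribute manifest change; a rough sketch, assuming the service is declared in AndroidManifest.xml along these lines (the actual declaration in Orgzly may differ):

```xml
<!-- Hypothetical excerpt: making the sync service startable by other apps -->
<service
    android:name="com.orgzly.android.sync.SyncService"
    android:exported="true" />
```

With android:exported="true", other apps such as Tasker can start the service without root.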

@ghost

ghost commented Mar 23, 2017

It would be lovely to have this exported (even with auto-sync). Thanks.

@anderspapitto

I'd like to offer my use case and desired UX as an example.

I run orgzly against a folder in local storage, which (without any orgzly integration) I have synced to my computer with Syncthing.
I'm careful not to make conflicting updates: I have one org file which I update from my phone. The rest I update only on my computer, and Orgzly is read-only for those files. So I never get conflicts, but I do have to press the sync button every so often.

Ideally (for me) orgzly would just attempt a sync every so often (say hourly). If it works, great. If it would produce a conflict, skip the sync entirely (i.e. don't make the multiple conflicted versions), and just give a notification that says "automatic sync failed. push the sync button whenever you want and resolve your conflicts manually"

@ghost

ghost commented Mar 23, 2017

Anderspapitto, I am currently using Tasker to do a periodic sync as you suggest, and FolderSync to handle the remote sync and file-conflict resolution. So in at least one form, what you are asking for can be done already. :)

nevenz added a commit that referenced this issue Mar 24, 2017
Added new actions for explicit control of SyncService. Toggle (stop if
running, start if stopped) is still default, if no action is
specified.

Service is also exported now, so it can be started by other apps, such
as Tasker (mentioned in #8).
@nevenz
Member

nevenz commented Mar 24, 2017

I've also added actions for better control, so what happens doesn't depend on current status:

am startservice -a com.orgzly.intent.action.SYNC_START com.orgzly/com.orgzly.android.sync.SyncService
am startservice -a com.orgzly.intent.action.SYNC_STOP com.orgzly/com.orgzly.android.sync.SyncService

If sync is already running, sending SYNC_START does nothing. If not, sending SYNC_STOP does nothing.

Toggling remains the default if no action is specified or recognized.

@ghost

ghost commented Mar 24, 2017

This is great! It is very useful to do this without needing root. Anyone having tasker that wants autosync should be very happy. Do you know when this may be released? Not pushing, just curious.

@nevenz
Member

nevenz commented Mar 25, 2017

Do you know when this may be released?

A week or two from now, probably.

@alphapapa

Would it be possible to implement a kind of on-demand auto-sync, so that when a file is loaded or when Orgzly regains focus, it would quickly check to see if the remote copy has been modified and sync that file if necessary?

As it is now, I have several Org files synced to Dropbox, and Orgzly takes a long time to sync all of them.
If I have to wait 1-2 minutes for it to sync every file before I can load the one file I need, it means that there are lots of situations in which Orgzly simply isn't useful. But if it could just sync the one file I'm loading, it would be much faster.

Thanks.

@ghost

ghost commented Apr 7, 2017

I think the golden standard for autosync (for me anyway) is Simplenote. I never have to do anything, and everything stays in sync.

In fact, "getting out of people's way" is something Simplenote does really well. Notice the difference in workflows:

In Orgzly:

  • Click Notebook
  • Click Note
  • Click "Edit Content" *
  • Type
  • Click "Close" (w/Save -- the checkmark) *
  • Click Sync *

In SimpleNote:

  • Click Note
  • Type

Big difference!

* Why do I have to do this?

I wish Simplenote and Orgzly would have a baby. It would be the most beautiful child in the world to me, and its uncle Emacs would definitely support it!

@nevenz
Member

nevenz commented Apr 8, 2017

Click "Edit Content" *
* Why do I have to do this?

For the same reason we have "Write" and "Preview" here (GitHub comments editor). 😄

Currently, only links are supported ([[link][description]]), but in the future, we'll have *bold*, /italic/, lists, etc.

"Write" mode is the default when you're creating a new note, "Preview" when editing an existing one, as an attempt to speed things up.

Suggestions for improving the UX are welcome. Perhaps the button could be renamed, for a start?

@ghost

ghost commented Apr 22, 2017

If we are not using a rooted phone, how can we run these commands?

am startservice -a com.orgzly.intent.action.SYNC_START com.orgzly/com.orgzly.android.sync.SyncService
am startservice -a com.orgzly.intent.action.SYNC_STOP com.orgzly/com.orgzly.android.sync.SyncService

@licaon-kter
Contributor

Use an automation tool like Tasker; you don't need root for these commands.

Be aware that it might complain about the user it runs as, so make it: am startservice --user 0 -a ... and the same for stop.

@ghost

ghost commented Apr 22, 2017

Oh, thanks, @licaon-kter ! The user error was the issue I was having! (I was running this from termux.)

@licaon-kter
Contributor

licaon-kter commented Apr 22, 2017

@nevenz After testing that in Termux (start and then stop), I looked at the app to see whether it synced. It did, but after exiting the app I got 2 crashes: https://gist.github.com/licaon-kter/346c8f7012dc61b7db30f99401808e4e

I'll try to look at the bigger log soon, maybe see what happened before the actual crash.

nevenz added a commit that referenced this issue Apr 24, 2017
Still handle null even though START_STICKY is not used anymore.
Fixes issue mentioned in #8.
@nevenz
Member

nevenz commented Apr 24, 2017

@licaon-kter Thanks, fixed.

@timoc

timoc commented Apr 30, 2017

I would prefer to offer a different strategy: Git branching. Whenever you make changes, they are committed to an orgzly branch, and vice versa. This implementation allows for more fun things, like having a limited subset of a larger Git repo under Orgzly. Whenever there is a conflict, it can automatically create a new branch. It's then up to the repo owner to tidy everything up.

@mkaito

mkaito commented Apr 30, 2017

If you were to use git as a backend, which is an idea I definitely like, it can probably solve a lot of conflicts itself.

@ghost

ghost commented Apr 30, 2017

I like the idea of git for merge conflict resolution.

I use Termux. I have it running Orgzly sync via cron (stop, then start). I have also set up inotifyd to watch my repository directory; when notes change, it runs a script that commits to Git, which I also set up in that directory. I am using Syncthing to sync the whole directory between my machines. Finally, I am using rclone via cron to 'back up' (end-to-end encrypted) the directory (actually the parent, with lots of other non-note files in it) to gdrive, and, using a sneaky trick of sharing directories on my always-on desktop, duplicating that on Yandex.Disk and Mega.

Sorry. TMI, I guess.

Anyway, I want concurrency, privacy, security, redundancy, integrity, and as much automation as possible in my setup. So, automating sync would still help.

@angrybacon

angrybacon commented Apr 30, 2017

This discussion is becoming a little long; I'm not sure whether anyone has mentioned Drive Sync already.

https://play.google.com/store/apps/details?id=com.ttxapps.drivesync

I like the way each folder can have its own strategy. More details here: https://metactrl.com/userguide/?app=drivesync#folder-pair (all the options are detailed at the end).

@alphapapa

This discussion is becoming a little long; I'm not sure whether anyone has mentioned Drive Sync already.

https://play.google.com/store/apps/details?id=com.ttxapps.drivesync

Um, well...:

Contains ads · Offers in-app purchases

D:

@angrybacon

@alphapapa you can just read the second link, no need to install. :-)

@pellenilsson

My take on the sync-outside-orgzly strategy, here driving it from the PC instead of from the mobile:

https://pantarei.xyz/posts/sync-org-mode-with-mobile/

@harshitgarg22

+1, are there any updates on this issue?

@NiceFeather

What I want to say is that automatically synchronizing changes made to a notebook while the network was down, once the device reconnects to Wi-Fi, is a special case of this issue.

I just ran some tests of automatic synchronization with a WebDAV server on my LAN after reconnecting to Wi-Fi. With Wi-Fi disconnected, I made some changes to a notebook; after the phone reconnected to Wi-Fi, it did not automatically synchronize with the LAN WebDAV server. I suggest that once the device regains the Wi-Fi signal, a synchronization attempt should be made and a message with the synchronization result shown.

@nevenz
Member

nevenz commented Oct 27, 2019

after the phone reconnected to Wi-Fi, it did not automatically synchronize

Yeah, this is another good case to handle.

@nevenz
Member

nevenz commented Oct 27, 2019

Summary of things to do to improve auto-sync:

  • Periodically check for changes in repositories
  • Enable auto-sync for all repository types
    • Queue sync requests to avoid running sync too often (for Dropbox API)
    • Consider adding per-repo option (except Dropbox) or just enable it for WebDAV
  • Check for remote changes before local modification (FR: auto sync without interaction #434)
  • Send sync request when connection is re-established
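The queueing/debouncing point above can be sketched with a stamp file; a minimal illustration, not Orgzly code (the stamp path, the interval, and the commented-out trigger command are assumptions):

```shell
#!/bin/sh
# Debounce sketch: only trigger a sync if the last one was more than
# MIN_INTERVAL seconds ago.
STAMP="${TMPDIR:-/tmp}/orgzly-last-sync.$$"   # hypothetical stamp file
MIN_INTERVAL=60

maybe_sync() {
  now=$(date +%s)
  last=$(cat "$STAMP" 2>/dev/null || echo 0)
  if [ $((now - last)) -ge "$MIN_INTERVAL" ]; then
    echo "$now" > "$STAMP"
    # A real trigger could be the intent mentioned earlier in this thread, e.g.:
    # am startservice -a com.orgzly.intent.action.SYNC_START com.orgzly/com.orgzly.android.sync.SyncService
    echo "sync triggered"
  else
    echo "sync skipped"
  fi
}

maybe_sync   # first call: triggers
maybe_sync   # immediate second call: debounced
rm -f "$STAMP"
```

Requests that arrive inside the interval simply collapse into the pending one, which is what keeps rate-limited backends like the Dropbox API happy.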

@Nebucatnetzer

Sounds very reasonable to me.

@MartinX3

In my opinion the merge-conflict handling needs a good strategy. :)

  1. You could accidentally make changes on two devices without syncing first.
  2. Or one of the devices could have a wrong system time, by accident or for any other reason.

@Nebucatnetzer

A solid system would be nice, for sure. I think, however, that it would be better to have something basic quickly. Syncing every time the app opens would already be a huge improvement, since you wouldn't be editing old files on your phone.

@MartinX3

Yeah, a sync on every save/open, since it's not intended as a multi-user application.

@nevenz
Member

nevenz commented Oct 27, 2019

In my opinion the merge-conflict handling needs a good strategy. :)

Conflicts can always happen, no matter what services are used; auto-syncing just minimises the chance of them happening. I don't think there's an issue opened for handling conflict resolution.

Syncing every time the app opens would already be a huge improvement, since you wouldn't be editing old files on your phone.

This is already implemented, but only enabled for the Directory repository type. Perhaps it should have been enabled for WebDAV too, as the restriction was added to limit hitting the Dropbox API. Something will be done for v1.8.1.

nevenz added a commit that referenced this issue Nov 2, 2019
@nevenz nevenz mentioned this issue Feb 24, 2020
@alensiljak alensiljak mentioned this issue Feb 27, 2021
@doak

doak commented Feb 28, 2021

That's quite a hot and long-requested feature ;)

TL;DR
This is about a workaround using inotifyd to trigger Orgzly's sync.

Some days back in 2017, another user wrote:

I use Termux. I have it running Orgzly sync via cron (stop, then start). I have also set up inotifyd to watch my repository directory; when notes change, it runs a script that commits to Git, which I also set up in that directory.

I just stumbled over this after I scripted something similar and wanted to leave a note here ;)

My script does the following:

  • Listen for file changes in specified directories using inotifyd.
    (This does not work recursively, hence in the case of Orgzly you need to specify every directory you sync with.)
  • Trigger synchronisation of Orgzly on "every" relevant change. (It debounces a little bit.)

For now it seems to work quite well.
Orgzly itself is configured to automatically sync on its own changes, but sync on app start or resume is disabled; it is not necessary anymore and would only disturb things in this setup.
Syncing to the device is done using Syncthing, which does quite a good job here. The notes usually get updated within 20s at the latest. This also works for the widget, btw ;)
It runs in Termux and uses Termux:Boot to start up automatically. Battery usage seems okayish.

I've also added safety checks for the case where the change events, and with them the sync trigger, go haywire (and the debouncing would not help): they stop the sync for some time (and show a notification to inform about it). Better handling of the script, like manually stopping or starting it, is not available yet.

If anybody is interested, the code is available here:
https://gitlab.com/doak/orgzly-watcher

@lytex

lytex commented Mar 6, 2021

That's quite a hot and long-requested feature ;)

I bet it is!

For some months I've also been using inotifywait inside Termux, which does support recursive watches (see the exact command I use, which also excludes the .git folder). I use this specific script to commit automatically when there is a file change.

The recommended intent is a broadcast instead of invoking SyncService directly. I have experienced some inconsistencies using SyncService on Android 10 (it isn't always triggered correctly).

I also have battery issues (faster sync means more battery drain), but I don't mind the high battery usage very much.

@xeruf

xeruf commented Jan 3, 2023

Isn't this basically fixed now with the various autosync options?
Or is it superseded by #434?

@doak

doak commented Jan 3, 2023

Isn't this basically fixed now with the various autosync options?

I don't think so. No option allows triggering the sync if and only if something changed remotely (a push trigger), AFAIK.

@doak

doak commented Jan 3, 2023

If anybody is interested, the code is available here:
https://gitlab.com/doak/orgzly-watcher

Btw, I have updated the code to work on Android 13 and the current Orgzly version.
It still works great in combination with Syncthing. I still need to document all the dependencies which need to be installed in Termux, though.

@xeruf

xeruf commented Jan 4, 2023

What is the advantage over just syncing upon opening the app?

@daraul

daraul commented Jan 4, 2023

One advantage is that Orgzly will know that I've completed a task without me having to open the app myself, and will therefore not give me a notification when the task is due. At least I think that's how it would work.

@JimBreton

JimBreton commented Jan 4, 2023 via email

@xeruf

xeruf commented Jan 12, 2023

Yes, I see: for example, when your phone is online but you only open Orgzly when you are offline again.

@JimBreton

JimBreton commented Jan 12, 2023 via email

@fabian-thomas

Several people above have discussed how to solve this problem for files synced via tools like Syncthing. I've collected parts of all the solutions posted above and unified them into a short and simple script. Check out the gist. I hope that someone finds the time to fix this issue natively in the future.

lyz-code added a commit to lyz-code/blue-book that referenced this issue Nov 10, 2023
…ards

You have three options:

- Suspend: Stops it from showing up indefinitely, until you reactivate it through the browser.
- Bury: Just delays it until the next day.
- Delete: It deletes it forever.

Unless you're certain that you are no longer going to need it, suspend it.

feat(anki#Configure self hosted synchronization): Configure self hosted synchronization

Explain how to install `anki-sync-server` and how to configure Ankidroid
and Anki. In the end I dropped this path and used Ankidroid alone with
syncthing as I didn't need to interact with the decks from the computer. Also the ecosystem of synchronization in Anki at 2023-11-10 is confusing as there are many servers available, not all are compatible with the clients and Anki itself has released it's own so some of the community ones will eventually die.

feat(bash_snippets#Loop through a list of files found by find): Loop through a list of files found by find

For simple loops use the `find -exec` syntax:

```bash
find . -name '*.txt' -exec process {} \;
```

For more complex loops use a `while read` construct:

```bash
find . -name "*.txt" -print0 | while read -r -d $'\0' file
do
    …code using "$file"
done
```

The loop will execute while the `find` command is executing. Plus, this command will work even if a file name is returned with whitespace in it. And, you won't overflow your command line buffer.

The `-print0` will use the NULL as a file separator instead of a newline and the `-d $'\0'` will use NULL as the separator while reading.

How not to do it:

If you try to run the next snippet:

```bash
for file in $(find . -name "*.txt")
do
    …code using "$file"
done
```

You'll get the next [`shellcheck`](shellcheck.md) warning:

```
SC2044: For loops over find output are fragile. Use find -exec or a while read loop.
```

You should not do this, for three reasons:

- For the for loop to even start, the find must run to completion.
- If a file name has any whitespace (including space, tab or newline) in it, it will be treated as two separate names.
- Although now unlikely, you can overrun your command line buffer. Imagine if your command line buffer holds 32KB, and your for loop returns 40KB of text. That last 8KB will be dropped right off your for loop and you'll never know it.
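A self-contained demonstration of the recommended pattern (the temporary directory and file names are made up for the demo):

```shell
# Demo: find -print0 with a while read loop handles names containing spaces.
tmpdir=$(mktemp -d)
touch "$tmpdir/plain.txt" "$tmpdir/with space.txt"

count=0
# Process substitution (bash) keeps the loop in the current shell, so
# $count survives; with a plain pipe the loop would run in a subshell.
while read -r -d $'\0' file; do
    count=$((count + 1))
    echo "processing: $file"
done < <(find "$tmpdir" -name '*.txt' -print0)

echo "handled $count files"
rm -r "$tmpdir"
```

Note the process substitution at the end: it is a bash feature, chosen here so that variables set inside the loop remain visible afterwards.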

feat(pytest#Stop pytest right at the start if condition not met): Stop pytest right at the start if condition not met

Use the `pytest_configure` [initialization hook](https://docs.pytest.org/en/4.6.x/reference.html#initialization-hooks).

In your global `conftest.py`:

```python
import requests
import pytest

def pytest_configure(config):
    try:
        requests.get('http://localhost:9200')
    except requests.exceptions.ConnectionError:
        msg = 'FATAL. Connection refused: ES does not appear to be installed as a service (localhost port 9200)'
        pytest.exit(msg)
```

- Note that the single argument of `pytest_configure` has to be named `config`.
- Using `pytest.exit` makes it look nicer.

feat(python_docker#Using PDM): Dockerize a PDM application

It is possible to use PDM in a multi-stage Dockerfile to first install the project and dependencies into `__pypackages__` and then copy this folder into the final stage, adding it to `PYTHONPATH`.

```dockerfile
FROM python:3.11-slim-bookworm AS builder

RUN pip install pdm

COPY pyproject.toml pdm.lock README.md /project/
COPY src/ /project/src

WORKDIR /project
RUN mkdir __pypackages__ && pdm sync --prod --no-editable

FROM python:3.11-slim-bookworm

ENV PYTHONPATH=/project/pkgs
COPY --from=builder /project/__pypackages__/3.11/lib /project/pkgs

COPY --from=builder /project/__pypackages__/3.11/bin/* /bin/

CMD ["python", "-m", "project"]
```

feat(python_snippets#Configure the logging of a program to look nice): Configure the logging of a program to look nice

```python
import logging
import sys

import telebot  # third-party dependency (pyTelegramBotAPI)


def load_logger(verbose: bool = False) -> None:  # pragma: no cover
    """Configure the Logging logger.

    Args:
        verbose: Set the logging level to Debug.
    """
    logging.addLevelName(logging.INFO, "\033[36mINFO\033[0m")
    logging.addLevelName(logging.ERROR, "\033[31mERROR\033[0m")
    logging.addLevelName(logging.DEBUG, "\033[32mDEBUG\033[0m")
    logging.addLevelName(logging.WARNING, "\033[33mWARNING\033[0m")

    if verbose:
        logging.basicConfig(
            format="%(asctime)s %(levelname)s %(name)s: %(message)s",
            stream=sys.stderr,
            level=logging.DEBUG,
            datefmt="%Y-%m-%d %H:%M:%S",
        )
        telebot.logger.setLevel(logging.DEBUG)  # Outputs debug messages to console.
    else:
        logging.basicConfig(
            stream=sys.stderr, level=logging.INFO, format="%(levelname)s: %(message)s"
        )
```

feat(python_snippets#Get the modified time of a file with Pathlib): Get the modified time of a file with Pathlib

```python
file_ = Path('/to/some/file')
file_.stat().st_mtime
```

You can also access:

- Created time: with `st_ctime`
- Accessed time: with `st_atime`

They are timestamps, so if you want to compare them with a datetime object, use the `timestamp()` method:

```python
assert datetime.now().timestamp() - file_.stat().st_mtime < 60
```

feat(collaborating_tools): Introduce collaborating tools

Collaborating document creation:

- https://pad.riseup.net
- https://rustpad.io . [Can be self hosted](https://github.com/ekzhang/rustpad)

Collaborating through terminals:

- [sshx](https://sshx.io/) looks promising although I think it uses their servers to do the connection, which is troublesome.

fix(kubernetes_tools#Tried): Recommend rke2 over k3s

A friend told me that it works better.

feat(emojis#Most used): Create a list of most used emojis

```
¯\(°_o)/¯

¯\_(ツ)_/¯

(╯°□°)╯ ┻━┻

\\ ٩( ᐛ )و //

(✿◠‿◠)

(/゚Д゚)/

(¬º-°)¬

(╥﹏╥)

ᕕ( ᐛ )ᕗ

ʕ•ᴥ•ʔ

( ˘ ³˘)♥

❤
```

feat(gitea#Run jobs if other jobs failed): Run jobs if other jobs failed

This is useful to send notifications if any of the jobs failed.

[Right now](go-gitea/gitea#23725) you can't run a job if other jobs fail; all you can do is add a last step to each workflow to send the notification on failure:

```yaml
- name: Send mail
  if: failure()
  uses: https://github.com/dawidd6/action-send-mail@v3
  with:
    to: ${{ secrets.MAIL_TO }}
    from: Gitea <gitea@hostname>
    subject: ${{ gitea.repository }} ${{gitea.workflow}} ${{ job.status }}
    priority: high
    convert_markdown: true
    html_body: |
      ### Job ${{ job.status }}

      ${{ github.repository }}: [${{ github.ref }}@${{ github.sha }}](${{ github.server_url }}/${{ github.repository }}/actions)
```

feat(grapheneos#Split the screen): Split the screen

Go into app switcher, tap on the app icon above the active app and then select "Split top".

feat(how_to_code): Personal evolution on how I code

Over the years I've tried different ways of developing my code:

- Mindless coding: write code as you need to make it work, with no tests, documentation or any quality measure.
- TDD.
- Try to abstract everything to minimize the duplication of code between projects.

Each has its advantages and disadvantages. After trying them all, and given that right now I only have short spikes of energy and time to invest in coding, my plan is to:

- Make the minimum effort to design the minimum program able to solve the problem at hand. This design will be represented in an [orgmode](orgmode.md) task.
- Write the minimum code to make it work without thinking of tests or generalization, but with the [domain driven design](domain_driven_design.md) concepts so the code remains flexible and maintainable.
- Once it's working see if I have time to improve it:
  - Create the tests to cover the critical functionality (no more 100% coverage).
  - If I need to make a package or the program evolves into something complex I'd use [this scaffold template](https://github.com/lyz-code/cookiecutter-python-project).

Once the spike is over I'll wait for a new spike to come either because I have time or because something breaks and I need to fix it.

feat(life_analysis): Introduce the analysis of life process

It's interesting to do analysis at representative moments of the year. It gives it an emotional weight. You can for example use the solstices or my personal version of the solstices:

- Spring analysis (1st of March): For me the spring is the real start of the year, it's when life explodes after the stillness of the winter. The sun starts to set later enough so that you have light in the afternoons, the climate gets warmer thus inviting you to be more outside, the nature is blooming new leaves and flowers. It is then a moment to build new projects and set the current year on track.
- Summer analysis (1st of June): I hate heat, so summer is a moment of retreat. Everyone temporarily stops their lives, we go on holidays, and all social projects slow their pace. Even the news has less interesting things to report. It's so hot outside that some of us seek the cold refuge of home or of remote holiday places. Days are long and people love to hang out till late, so usually you wake up later, thus having less time to actually do stuff. Even in the moments when you are alone, the heat drains your energy to be productive. It is then a moment to relax and gather forces for the next trimester. It's also perfect for developing *easy* and *chill* personal projects that have been forgotten in a drawer. Lower your expectations and just flow with what your body asks of you.
- Autumn analysis (1st of September): September is another key moment for many people. We have it hardcoded into our lives since we were children, as it was the start of school. People feel energized after the summer holidays and are eager to get back to their lives and stopped projects. You're already 6 months into the year, so it's a good moment to review your year plan and decide how you want to invest your energy reserves.
- Winter analysis (1st of December): December is the cue that the year is coming to an end. The days grow shorter and colder; they basically invite you to enjoy a cup of tea under a blanket. It is then a good time to get into your cave, do an introspective analysis of the whole year, and prepare the ground for the coming year.

We see then that the year is divided into two pairs of an expansion trimester and a retreat one. We can use this information to plan our tasks accordingly: in the expansion trimesters we could invest more energy in planning, and in the retreat ones we can do more thorough reviews.

feat(life_planning#month-plan): Introduce the month planning process

The objectives of the month plan are:

- Define the month objectives according to the trimester plan and the insights gathered in the past month review.
- Make your backlog and todo list match the month objectives.
- Define the philosophical topics to address.
- Define the topics to learn.
- Define the habits to incorporate.
- Define the checks you want to do at the end of the month.
- Plan when the next review is going to be.

It's interesting to do the plannings on meaningful days such as the first one of the month. Usually we don't have enough flexibility in our life to do it exactly that day, so schedule it the closest you can to that date. It's a good idea to do both the review and the planning on the same day.

We'll divide the planning process in these phases:

- Prepare
- Clarify your state
- Decide the month objectives

Prepare:

It's important that you prepare your environment for the planning. You need to be present and fully focused on the process itself. To do so you can:

- Make sure you don't get interrupted:
    - Check your task manager tools to make sure that you don't have anything urgent to address in the next hour.
    - Disable all notifications
- Set your analysis environment:
    - Put on the music that helps you get *in the zone*.
    - Get all the things you may need for the review:
        - The checklist that defines the process of your planning (this document in my case).
        - Somewhere to write down the insights.
        - Your task manager system
        - Your habit manager system
        - Your *Objective list*.
        - Your *Thinking list*.
        - Your *Reading list*.
    - Remove from your environment everything else that may distract you

Clarify your state:

To be able to make a good decision on your month's path you need to sort out which is your current state. To do so:

- Clean your inbox: refile each item until it's empty.
- Clean your todo: review each todo element and decide whether it should still be in the todo. If it should, and it belongs to a month objective, add it there; if it doesn't need to be in the todo, refile it.
- Clean your someday: review each relevant someday element (not the ones archived at levels greater than the month) and decide whether it should be refiled elsewhere and whether it is part of a month objective that should be dealt with this month.
- Address each of the trimester objectives by creating month objectives that get you closer to the desired objective.

Decide the next steps:

For each of your month objectives:

- Decide whether it makes sense to address it this month. If not, archive it.
- Create a clear plan of action for this month on that objective
- Tweak your *things to think about list*.
- Tweak your *reading list*.
- Tweak your *habit manager system*.

feat(linux_snippets#Accept new ssh keys by default): Accept new ssh keys by default

While common wisdom is not to disable host key checking, there is a built-in option in SSH itself to do this. It is relatively unknown, since it's new (added in OpenSSH 6.5).

This is done with `-o StrictHostKeyChecking=accept-new`. Or if you want to use it for all hosts you can add the next lines to your `~/.ssh/config`:

```
Host *
  StrictHostKeyChecking accept-new
```

WARNING: use this only if you absolutely trust the IP/hostname you are going to SSH to:

```bash
ssh -o StrictHostKeyChecking=accept-new mynewserver.example.com
```

Note, `StrictHostKeyChecking=no` will add the public key to `~/.ssh/known_hosts` even if the key was changed. `accept-new` is only for new hosts. From the man page:

> If this flag is set to “accept-new” then ssh will automatically add new host keys to the user known hosts files, but will not permit connections to hosts with changed host keys. If this flag is set to “no” or “off”, ssh will automatically add new host keys to the user known hosts files and allow connections to hosts with changed hostkeys to proceed, subject to some restrictions. If this flag is set to ask (the default), new host keys will be added to the user known host files only after the user has confirmed that is what they really want to do, and ssh will refuse to connect to hosts whose host key has changed. The host keys of known hosts will be verified automatically in all cases.

feat(linux_snippets#Do not add trailing / to ls): Do not add trailing / to ls

Probably, your `ls` is aliased or defined as a function in your config files.

Use the full path to `ls` like:

```bash
/bin/ls /var/lib/mysql/
```
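A quick way to check, and an alternative to typing the full path: `type` shows what `ls` currently resolves to, and `command` bypasses functions and aliases (the shadowing function below is just for demonstration):

```shell
# Simulate a config file that shadows ls with a function adding -F
# (the option that appends the trailing / to directories).
ls() { /bin/ls -F "$@"; }

type ls            # reports that ls is currently a function
command ls /       # 'command' skips the function and runs the real ls
unset -f ls        # remove the demo function again
```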

feat(linux_snippets#Convert png to svg): Convert png to svg

Inkscape has got an awesome auto-tracing tool.

- Install Inkscape using `sudo apt-get install inkscape`
- Import your image
- Select your image
- From the menu bar, select Path > Trace Bitmap
- Adjust the tracing parameters as needed
- Save as svg

Check their [tracing tutorial](https://inkscape.org/en/doc/tutorials/tracing/tutorial-tracing.html) for more information.

Once you are comfortable with the tracing options, you can automate it using the [Inkscape CLI](https://inkscape.org/en/doc/inkscape-man.html).

feat(linux_snippets#Redirect stdout and stderr of a cron job to a file): Redirect stdout and stderr of a cron job to a file

```
*/1 * * * * /home/ranveer/vimbackup.sh >> /home/ranveer/vimbackup.log 2>&1
```
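The order matters: `2>&1` has to come after `>> file`, because redirections are applied left to right (first stdout goes to the file, then stderr is duplicated onto stdout's new target). A quick check that both streams end up in the log:

```shell
tmp=$(mktemp)
# stdout is redirected to the log first, then stderr duplicated onto it
sh -c 'echo out; echo err >&2' >> "$tmp" 2>&1
cat "$tmp"   # both "out" and "err" are in the file
```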

feat(linux_snippets#Error when unmounting a device Target is busy): Error when unmounting a device Target is busy

- Check the processes that are using the mountpoint with `lsof /path/to/mountpoint`
- Kill those processes
- Try the umount again

If that fails, you can use `umount -l`.

feat(loki#installation): How to install loki

There are [many ways to install Loki](https://grafana.com/docs/loki/latest/setup/install/), we're going to do it using `docker-compose` taking [their example as a starting point](https://raw.githubusercontent.com/grafana/loki/v2.9.1/production/docker-compose.yaml) and complementing our already existent [grafana docker-compose](grafana.md#installation).

It makes use of the [environment variables to configure Loki](https://grafana.com/docs/loki/latest/configure/#configuration-file-reference), that's why we have the `-config.expand-env=true` flag in the command line launch.

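A minimal `docker-compose` service for Loki along those lines could look like the sketch below (the image tag matches the linked example; the `./loki` volume path is an assumption, adjust it to wherever you keep your config):

```yaml
services:
  loki:
    image: grafana/loki:2.9.1
    command: -config.file=/etc/loki/local-config.yaml -config.expand-env=true
    ports:
      - "3100:3100"
    volumes:
      - ./loki:/etc/loki
```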
In the grafana datasources directory add `loki.yaml`:

```yaml
---
apiVersion: 1

datasources:
  - name: Loki
    type: loki
    access: proxy
    orgId: 1
    url: http://loki:3100
    basicAuth: false
    isDefault: true
    version: 1
    editable: false
```

[Storage configuration](https://grafana.com/docs/loki/latest/storage/):

Unlike other logging systems, Grafana Loki is built around the idea of only indexing metadata about your logs: labels (just like Prometheus labels). Log data itself is then compressed and stored in chunks in object stores such as S3 or GCS, or even locally on the filesystem. A small index and highly compressed chunks simplify the operation and significantly lower the cost of Loki.

Loki 2.0 brings an index mechanism named ‘boltdb-shipper’ and is what we now call Single Store. This type only requires one store, the object store, for both the index and chunks.

Loki 2.8 adds TSDB as a new mode for the Single Store and is now the recommended way to persist data in Loki as it improves query performance, reduces TCO and has the same feature parity as “boltdb-shipper”.

feat(orgzly#Avoid the conflicts in the files edited in two places): Avoid the conflicts in the files edited in two places

If you use Syncthing you may be seeing conflicts in your files. This happens especially if you use the Orgzly widget to add tasks, because the widget doesn't synchronize the files to the directory. If you have a file that changes a lot on one device, for example the `inbox.org` of my mobile, it's worth dedicating that file to that device: when you want to edit it elsewhere, sync as specified below, proceed with the editing, and once you're done manually sync the changes in Orgzly again. The rest of the files synced to the mobile are read-only reference, so they rarely change.

If you want to sync reducing the chance of conflicts then:

- Open Orgzly and press Synchronize
- Open Syncthing.

If that's not enough [check these automated solutions](orgzly/orgzly-android#8):

- [Orgzly auto syncronisation for sync tools like syncthing](https://gist.github.com/fabian-thomas/6f559d0b0d26737cf173e41cdae5bfc8)
- [watch-for-orgzly](https://gitlab.com/doak/orgzly-watcher/-/blob/master/watch-for-orgzly?ref_type=heads)

Other interesting solutions:

- [org-orgzly](https://codeberg.org/anoduck/org-orgzly): Script to parse a chosen org file or files, check if an entry meets required parameters, and if it does, write the entry in a new file located inside the folder you desire to sync with orgzly.
- [Git synchronization](orgzly/orgzly-android#24): I find it more cumbersome than syncthing but maybe it's interesting for you.

feat(orgzly#references): add new orgzly fork

[Alternative fork maintained by the community](https://github.com/orgzly-revived/orgzly-android-revived)

feat(pytelegrambotapi): Introduce pytelegrambotapi

[pyTelegramBotAPI](https://github.com/eternnoir/pyTelegramBotAPI) is a synchronous and asynchronous implementation of the [Telegram Bot API](https://core.telegram.org/bots/api).

[Installation](https://pytba.readthedocs.io/en/latest/install.html):

```bash
pip install pyTelegramBotAPI
```

feat(pytelegrambotapi#Create your bot): Create your bot

Use the `/newbot` command to create a new bot. `@BotFather` will ask you for a name and username, then generate an authentication token for your new bot.

- The `name` of your bot is displayed in contact details and elsewhere.
- The `username` is a short name, used in search, mentions and t.me links. Usernames are 5-32 characters long and not case sensitive, but may only include Latin characters, numbers, and underscores. Your bot's username must end in 'bot', like `tetris_bot` or `TetrisBot`.
- The `token` is a string, like `110201543:AAHdqTcvCH1vGWJxfSeofSAs0K5PALDsaw`, which is required to authorize the bot and send requests to the Bot API. Keep your token secure and store it safely, it can be used by anyone to control your bot.

To edit your bot, you have the following commands available:

- `/setname`: change your bot's name.
- `/setdescription`: change the bot's description (short text up to 512 characters). Users will see this text at the beginning of the conversation with the bot, titled 'What can this bot do?'.
- `/setabouttext`: change the bot's about info, a shorter text up to 120 characters. Users will see this text on the bot's profile page. When they share your bot with someone, this text is sent together with the link.
- `/setuserpic`: change the bot's profile picture.
- `/setcommands`: change the list of commands supported by your bot. Users will see these commands as suggestions when they type / in the chat with your bot. See commands for more info.
- `/setdomain`: link a website domain to your bot. See the login widget section.
- `/deletebot`: delete your bot and free its username. Cannot be undone.

feat(pytelegrambotapi#Synchronous TeleBot): Synchronous TeleBot

```python

import telebot

API_TOKEN = '<api_token>'

bot = telebot.TeleBot(API_TOKEN)

@bot.message_handler(commands=['help', 'start'])
def send_welcome(message):
    bot.reply_to(message, """\
Hi there, I am EchoBot.
I am here to echo your kind words back to you. Just say anything nice and I'll say the exact same thing to you!\
""")

@bot.message_handler(func=lambda message: True)
def echo_message(message):
    bot.reply_to(message, message.text)

bot.infinity_polling()
```

feat(pytelegrambotapi#Asynchronous TeleBot): Asynchronous TeleBot

```python

from telebot.async_telebot import AsyncTeleBot
bot = AsyncTeleBot('TOKEN')

@bot.message_handler(commands=['help', 'start'])
async def send_welcome(message):
    await bot.reply_to(message, """\
Hi there, I am EchoBot.
I am here to echo your kind words back to you. Just say anything nice and I'll say the exact same thing to you!\
""")

@bot.message_handler(func=lambda message: True)
async def echo_message(message):
    await bot.reply_to(message, message.text)

import asyncio
asyncio.run(bot.polling())
```

feat(pytest-xprocess): Introduce pytest-xprocess

[`pytest-xprocess`](https://github.com/pytest-dev/pytest-xprocess) is a pytest plugin for managing external processes across test runs.

[Installation](https://pytest-xprocess.readthedocs.io/en/latest/#quickstart):

```bash
pip install pytest-xprocess
```

[Usage](https://pytest-xprocess.readthedocs.io/en/latest/#quickstart):

Define your process fixture in `conftest.py`:

```python
import pytest
from xprocess import ProcessStarter

@pytest.fixture
def myserver(xprocess):
    class Starter(ProcessStarter):
        # startup pattern
        pattern = "[Ss]erver has started!"

        # command to start process
        args = ['command', 'arg1', 'arg2']

    # ensure process is running and return its logfile
    logfile = xprocess.ensure("myserver", Starter)

    conn = None  # create a connection or url/port info to the server
    yield conn

    # clean up whole process tree afterwards
    xprocess.getinfo("myserver").terminate()
```

Now you can use this fixture in any test functions where `myserver` needs to be up and `xprocess` will take care of it for you.

[Matching process output with pattern](https://pytest-xprocess.readthedocs.io/en/latest/starter.html#matching-process-output-with-pattern):

In order to detect that your process is ready to answer queries,
`pytest-xprocess` allows the user to provide a string pattern by setting the
class variable pattern in the Starter class. `pattern` will be waited for in
the process `logfile` for a maximum time defined by `timeout` before timing out in
case the provided pattern is not matched.

It’s important to note that pattern is a regular expression and will be matched using python `re.search`.
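Since the match uses `re.search`, the pattern can hit anywhere in a log line, so leading timestamps or log levels don't matter. A quick illustration (the log line is made up):

```python
import re

pattern = "[Ss]erver has started!"  # same pattern as the fixture above
line = "2024-01-01 12:00:03 INFO server has started!"

# re.search scans the whole string, not just its beginning
print(re.search(pattern, line) is not None)  # → True
```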

[Controlling Startup Wait Time with timeout](https://pytest-xprocess.readthedocs.io/en/latest/starter.html#controlling-startup-wait-time-with-timeout):

Some processes naturally take longer to start than others. By default,
`pytest-xprocess` will wait for a maximum of 120 seconds for a given process to
start before raising a `TimeoutError`. Changing this value may be useful, for
example, when the user knows that a given process would never take longer than
a known amount of time to start under normal circumstances, so if it does go
over this known upper boundary, something is wrong and the waiting
process must be interrupted. The maximum wait time can be controlled through the
class variable `timeout`.

```python
@pytest.fixture
def myserver(xprocess):
    class Starter(ProcessStarter):
        # will wait for 10 seconds before timing out
        timeout = 10
```

Passing command line arguments to your process with `args`:

In order to start a process, pytest-xprocess must be given a command to be passed into the subprocess.Popen constructor. Any arguments passed to the process command can also be passed using args. As an example, if I usually use the following command to start a given process:

```bash
$> myproc -name "bacon" -cores 4 <destdir>
```

That would look like:

```python
args = ['myproc', '-name', '"bacon"', '-cores', '4', '<destdir>']
```

when using args in pytest-xprocess to start the same process.

```python
@pytest.fixture
def myserver(xprocess):
    class Starter(ProcessStarter):
        # will pass "$> myproc -name "bacon" -cores 4 <destdir>"  to the
        # subprocess.Popen constructor so the process can be started with
        # the given arguments
        args = ['myproc', '-name', '"bacon"', '-cores', '4', '<destdir>']

        # ...
```

feat(python_prometheus): How to create a prometheus exporter with python

[prometheus-client](https://github.com/prometheus/client_python) is the official Python client for [Prometheus](prometheus.md).

Installation:

```bash
pip install prometheus-client
```

Here is a simple script:

```python
from prometheus_client import start_http_server, Summary
import random
import time

REQUEST_TIME = Summary('request_processing_seconds', 'Time spent processing request')

@REQUEST_TIME.time()
def process_request(t):
    """A dummy function that takes some time."""
    time.sleep(t)

if __name__ == '__main__':
    # Start up the server to expose the metrics.
    start_http_server(8000)
    # Generate some requests.
    while True:
        process_request(random.random())
```

Then you can visit http://localhost:8000/ to view the metrics.

From one easy to use decorator you get:

- `request_processing_seconds_count`: Number of times this function was called.
- `request_processing_seconds_sum`: Total amount of time spent in this function.

Prometheus's rate function allows calculation of both requests per second, and latency over time from this data.
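For example, with the metric names above, the PromQL queries would look like this:

```
# requests per second over the last 5 minutes
rate(request_processing_seconds_count[5m])

# average request latency over the same window
rate(request_processing_seconds_sum[5m]) / rate(request_processing_seconds_count[5m])
```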

In addition if you're on Linux the process metrics expose CPU, memory and other information about the process for free.

feat(python-telegram): Analyze the different python libraries to interact with telegram

There are two ways to interact with Telegram through python:

- Client libraries
- Bot libraries

Client libraries:

Client libraries use your account to interact with Telegram itself through a developer API token.

The most popular to use is [Telethon](https://docs.telethon.dev/en/stable/index.html).

Bot libraries:

[Telegram lists many libraries to interact with the bot API](https://core.telegram.org/bots/samples#python), the most interesting are:

- [python-telegram-bot](#python-telegram-bot)
- [pyTelegramBotAPI](#pytelegrambotapi)
- [aiogram](#aiogram)

If there comes a moment when we have to create the messages ourselves, [telegram-text](https://telegram-text.alinsky.tech/api_reference) may be an interesting library to check.

[python-telegram-bot](https://github.com/python-telegram-bot/python-telegram-bot):

Pros:

- Popular: 23k stars, 4.9k forks
- Maintained: last commit 3 days ago
- They have a developers community to get help in [this telegram group](https://telegram.me/pythontelegrambotgroup)
- I like how they try to minimize third party dependencies, and how you can install the complements if you need them
- Built on top of asyncio
- Nice docs
- Fully supports the [Telegram bot API](https://core.telegram.org/bots/api)
- Has many examples

Cons:

- Interface is a little verbose and complicated at a first look
- Only to be run in a single thread (not a problem)

References:

- [Package documentation](https://docs.python-telegram-bot.org/) is the technical reference for python-telegram-bot. It contains descriptions of all available classes, modules, methods and arguments as well as the changelog.
- [Wiki](https://github.com/python-telegram-bot/python-telegram-bot/wiki/) is home to number of more elaborate introductions of the different features of python-telegram-bot and other useful resources that go beyond the technical documentation.
- [Examples](https://docs.python-telegram-bot.org/examples.html) section contains several examples that showcase the different features of both the Bot API and python-telegram-bot
- [Source](https://github.com/python-telegram-bot/python-telegram-bot)

[pyTelegramBotAPI](https://github.com/eternnoir/pyTelegramBotAPI):

Pros:

- Popular: 7.1k stars, 1.8k forks
- Maintained: last commit 3 weeks ago
- Both sync and async
- Nicer interface with decorators and simpler setup
- [They have an example on how to split long messages](https://github.com/eternnoir/pyTelegramBotAPI#sending-large-text-messages)
- [Nice docs on how to test](https://github.com/eternnoir/pyTelegramBotAPI#testing)
- They have a developers community to get help in [this telegram group](https://telegram.me/joinchat/Bn4ixj84FIZVkwhk2jag6A)
- Fully supports the [Telegram bot API](https://core.telegram.org/bots/api)
- Has examples

Cons:

- Uses lambdas inside the decorators, I don't know why it does it.
- The docs are not as thorough as `python-telegram-bot`'s.

References:

- [Documentation](https://pytba.readthedocs.io/en/latest/index.html)
- [Source](https://github.com/eternnoir/pyTelegramBotAPI)
- [Async Examples](https://github.com/eternnoir/pyTelegramBotAPI/tree/master/examples/asynchronous_telebot)

[aiogram](https://github.com/aiogram/aiogram):

Pros:

- Popular: 3.8k stars, 717 forks
- Maintained: last commit 4 days ago
- Async support
- They have a developers community to get help in [this telegram group](https://t.me/aiogram)
- Has type hints
- Cleaner interface than `python-telegram-bot`
- Fully supports the [Telegram bot API](https://core.telegram.org/bots/api)
- Has examples

Cons:

- Less popular than `python-telegram-bot`
- Docs are written at a developer level, difficult initial barrier to understand how to use it.

References:

- [Documentation](https://docs.aiogram.dev/en/dev-3.x/)
- [Source](https://github.com/aiogram/aiogram)
- [Examples](https://github.com/aiogram/aiogram/tree/dev-3.x/examples)

Conclusion:

Even if `python-telegram-bot` is the most popular and has the best docs, I prefer one of the others due to the easier interface. `aiogram`'s documentation is kind of crap, and as it's the first time I make a bot I'd rather have somewhere good to look at.

So I'd say to go first with `pyTelegramBotAPI` and if it doesn't go well, fall back to `python-telegram-bot`.

feat(rocketchat): Introduce Rocketchat integrations

Rocket.Chat supports webhooks to integrate tools and services you like into the platform. Webhooks are simple event notifications via HTTP POST. This way, any webhook application can post a message to a Rocket.Chat instance and much more.

With scripts, you can point any webhook to Rocket.Chat and process the requests to print customized messages, define the username and avatar of the user of the messages and change the channel for sending messages, or you can cancel the request to prevent undesired messages.

Available integrations:

- Incoming Webhook: Let an external service send a request to Rocket.Chat to be processed.
- Outgoing Webhook: Let Rocket.Chat trigger and optionally send a request to an external service and process the response.

By default, a webhook is designed to post messages only. The message is part of a JSON structure.

[Incoming webhook script](https://docs.rocket.chat/use-rocket.chat/workspace-administration/integrations#incoming-webhook-script):

To create a new incoming webhook:

- Navigate to Administration > Workspace > Integrations.
- Click +New at the top right corner.
- Switch to the Incoming tab.
- Turn on the Enabled toggle.
- Name: Enter a name for your webhook. The name is optional; however, providing a name to manage your integrations easily is advisable.
- Post to Channel: Select the channel (or user) where you prefer to receive the alerts. It is possible to override messages.
- Post as: Choose the username that this integration posts as. The user must already exist.
- Alias: Optionally enter a nickname that appears before the username in messages.
- Avatar URL: Enter a link to an image as the avatar URL if you have one. The avatar URL overrides the default avatar.
- Emoji: Enter an emoji optionally to use the emoji as the avatar. [Check the emoji cheat sheet](https://github.com/ikatyang/emoji-cheat-sheet/blob/master/README.md#computer)
- Turn on the Script Enabled toggle.
- Paste your script inside the Script field (check below for a sample script)
- Save the integration.
- Use the generated Webhook URL to post messages to Rocket.Chat.

The Rocket.Chat integration script should be written in ES2015 / ECMAScript 6. The script requires a global class named Script, which is instantiated only once during the first execution and kept in memory. This class contains a method called `process_incoming_request`, which is called by your server each time it receives a new request. The `process_incoming_request` method takes an object as a parameter with the request property and returns an object with a content property containing a valid Rocket.Chat message, or an object with an error property, which is returned as the response to the request in JSON format with a Code 400 status.

A valid Rocket.Chat message must contain a text field that serves as the body of the message. If you redirect the message to a channel other than the one indicated by the webhook token, you can specify a channel field that accepts room id or, if prefixed with "#" or "@", channel name or user, respectively.

You can use the console methods to log information to help debug your script. More information about the console can be found [here](https://developer.mozilla.org/en-US/docs/Web/API/Console/log). To view the logs, navigate to Administration > Workspace > View Logs.

```
/* exported Script */
/* globals console, _, s */

/** Global Helpers
 *
 * console - A normal console instance
 * _       - An underscore instance
 * s       - An underscore string instance
 */

class Script {
  /**
   * @params {object} request
   */
  process_incoming_request({ request }) {
    // request.url.hash
    // request.url.search
    // request.url.query
    // request.url.pathname
    // request.url.path
    // request.url_raw
    // request.url_params
    // request.headers
    // request.user._id
    // request.user.name
    // request.user.username
    // request.content_raw
    // request.content

    // console is a global helper to improve debug
    console.log(request.content);

    return {
      content:{
        text: request.content.text,
        icon_emoji: request.content.icon_emoji,
        channel: request.content.channel,
        // "attachments": [{
        //   "color": "#FF0000",
        //   "author_name": "Rocket.Cat",
        //   "author_link": "https://open.rocket.chat/direct/rocket.cat",
        //   "author_icon": "https://open.rocket.chat/avatar/rocket.cat.jpg",
        //   "title": "Rocket.Chat",
        //   "title_link": "https://rocket.chat",
        //   "text": "Rocket.Chat, the best open source chat",
        //   "fields": [{
        //     "title": "Priority",
        //     "value": "High",
        //     "short": false
        //   }],
        //   "image_url": "https://rocket.chat/images/mockup.png",
        //   "thumb_url": "https://rocket.chat/images/mockup.png"
        // }]
       }
    };

    // return {
    //   error: {
    //     success: false,
    //     message: 'Error example'
    //   }
    // };
  }
}
```

To test if your integration works, use curl to make a POST request to the generated webhook URL.

```bash
curl -X POST \
  -H 'Content-Type: application/json' \
  --data '{
      "icon_emoji": ":smirk:",
      "text": "Example message"
  }' \
  https://your-webhook-url
```

If you want to send the message to another channel or user use the `channel` argument with `@user` or `#channel`. Keep in mind that the user of the integration needs to be part of those channels if they are private.

```bash
curl -X POST \
  -H 'Content-Type: application/json' \
  --data '{
      "icon_emoji": ":smirk:",
      "channel": "#notifications",
      "text": "Example message"
  }' \
  https://your-webhook-url
```

If you want to do more complex things uncomment the part of the attachments.

feat(siem): Add Wazuh SIEM

[Wazuh](https://wazuh.com/)

feat(tails): Add interesting operations on tails

- [Upgrading a tails USB](https://tails.net/upgrade/tails/index.en.html)
- [Change the window manager](https://www.reddit.com/r/tails/comments/qzruhv/changing_window_manager/): Don't do it, they say it will break Tails although I don't understand why

feat(vim#Email inside nvim): Email inside nvim

The best looking one is himalaya

- [Home](https://pimalaya.org/himalaya/index.html)
- [Nvim plugin](https://git.sr.ht/%7Esoywod/himalaya-vim)
- [Source](https://github.com/soywod/himalaya)