Algorithm Exercises (Typescript)


TL;DR

Algorithm exercises solved in TypeScript, run with the Jest testing suite. Developed with TDD.


Go to Install and run

What is this?

This repository is part of a series of repositories that share the same objectives, each one built on a different software ecosystem depending on the chosen programming language.

Objectives

Functional

  • For academic purposes, it is a backup of some algorithm exercises (with their solutions), proposed by various sources: leetcode, hackerrank, projecteuler, ...

  • The solutions must be written in "vanilla code", that is, avoiding the use of external libraries (at runtime) as much as possible.

  • Adoption of methodology and good practices. Each exercise is implemented as a unit test set, using TDD (Test-Driven Development) and Clean Code ideas.

Technical

Foundation of a project that supports:

  • Explicit typing when the language supports it, even when it is not mandatory.
  • Static Code Analysis (Lint) of code, scripts and documentation.
  • Uniform Code Styling.
  • Unit Test framework.
  • Coverage collection, with a high coverage percentage: equal or close to 100%.
  • Pipeline (GitHub Actions). Each command must take care of its own return status code.
  • Docker-based workflow to replicate behavior in any environment.
  • Other tools to support the reinforcement of software development good practices.
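As an illustration of the "explicit typing" and "vanilla code" objectives, a hypothetical exercise could look like the sketch below (the problem and function name are illustrative, not taken from this repository):

```typescript
// Hypothetical exercise (in the spirit of Project Euler problem 1; illustrative only):
// sum of all natural numbers below `limit` that are multiples of 3 or 5.
// Explicitly typed, with no external runtime libraries ("vanilla code").
export function sumOfMultiples(limit: number): number {
  let total: number = 0;
  for (let i: number = 0; i < limit; i += 1) {
    if (i % 3 === 0 || i % 5 === 0) {
      total += i;
    }
  }
  return total;
}
```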

Install and Run

You can run tests in the following ways:

⭐️: Preferred way.

Install and Run directly

Using a Node.js runtime on your OS. You must first install dependencies:

npm install

Every problem is implemented as a function with a unit test.

Each unit test contains the test cases and the input data needed to solve the problem.

Run all tests:

npm run test

Test run with alternative behaviors

You can change the test run behavior using the following environment variables:

Variable     Values                        Default
LOG_LEVEL    debug, warning, error, info   info
BRUTEFORCE   true, false                   false
  • LOG_LEVEL: change verbosity level in outputs.
  • BRUTEFORCE: enable or disable running large tests (long run time, large amounts of data, high memory consumption).
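A sketch of how a test might consult these variables (hypothetical helpers; the repository's actual mechanism may differ):

```typescript
// Hypothetical helpers that read the BRUTEFORCE and LOG_LEVEL variables
// from an environment map such as process.env; illustrative only.
type Env = Record<string, string | undefined>;

function isBruteforceEnabled(env: Env): boolean {
  // Large tests stay disabled unless BRUTEFORCE=true is exported.
  return (env.BRUTEFORCE ?? 'false').toLowerCase() === 'true';
}

function logLevel(env: Env): string {
  // Default verbosity matches the table above.
  return env.LOG_LEVEL ?? 'info';
}
```

For example, `isBruteforceEnabled({ BRUTEFORCE: 'true' })` returns `true`, while with no variable set the large tests stay disabled and the log level falls back to `info`.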

Examples running tests with alternative behaviors

Run tests with debug outputs:

LOG_LEVEL=debug npm run test

Run brute-force tests with debug outputs:

BRUTEFORCE=true LOG_LEVEL=debug npm run test

Install and Run using make

The make tool is used to standardize the commands for the same tasks across the sibling repositories.

Run tests (libraries are installed as a dependency task in make):

make test

Run tests with debug outputs:

make test -e LOG_LEVEL=debug

Run brute-force tests with debug outputs:

make test -e BRUTEFORCE=true -e LOG_LEVEL=debug

Alternatively, use environment variables as a prefix:

BRUTEFORCE=true LOG_LEVEL=debug make test

Install and Run with Docker 🐳

Build an image of the test stage. Then create an ephemeral container and run the tests.

The BRUTEFORCE and LOG_LEVEL environment variables are passed from the current environment by docker-compose.

docker-compose --profile testing run --rm algorithm-exercises-ts-test

To change behavior using environment variables, you can pass them to the containers in the following ways:

From the host, using the docker-compose (compose.yaml) mechanism:

BRUTEFORCE=true LOG_LEVEL=debug docker-compose --profile testing run --rm algorithm-exercises-ts-test

Overriding the Docker CMD, passing variables as "-e" parameters of make:

docker-compose --profile testing run --rm algorithm-exercises-ts-test make test -e LOG_LEVEL=debug -e BRUTEFORCE=true

Install and Run with Docker 🐳 using make

make compose/build
make compose/test

To pass environment variables, you can use the docker-compose mechanism or override the CMD and pass them to make as "-e" arguments.

Passing environment variables using docker-compose (compose.yaml mechanism):

BRUTEFORCE=true LOG_LEVEL=debug make compose/test

Development workflow using Docker / docker-compose

Run a container with the development target, designed for a development workflow on top of this image. All application sources are mounted as a volume in the /app directory. Since dependencies are needed at run time, you must install them before running (and again after any dependency is added or changed).

# Build development target image
docker-compose build --compress algorithm-exercises-ts-dev
# Run an ephemeral container to install dependencies using the Docker runtime
# and store them in the host directory (via a bind-mount volume)
docker-compose run --rm algorithm-exercises-ts-dev npm install --verbose
# Run an ephemeral container, overriding the command to run tests
docker-compose run --rm algorithm-exercises-ts-dev npm run test

Run complete workflow (Docker + make)

The following command simulates a standardized pipeline across environments, using docker-compose and make.

make compose/build && make compose/lint && make compose/test && make compose/run

  • Build all Docker stages and tag the relevant images.
  • Run static analysis (lint) checks.
  • Run unit tests.
  • Run the final production-ready image as a container. This image is just a minimal "production ready" build (with no tests).

About development

Developed with runtime:

node --version
v22.2.0

Algorithm exercise sources

  • Leetcode, an online platform for coding interview preparation.
  • HackerRank, competitive programming challenges for both consumers and businesses.
  • Project Euler, a series of computational problems intended to be solved with computer programs.

Use these answers to learn some tips and tricks for algorithm tests.

Disclaimer. Why do I publish solutions?

As Project Euler says:

https://projecteuler.net/about#publish

I learned so much solving problem XXX, so is it okay to publish my solution elsewhere?
It appears that you have answered your own question. There is nothing quite like that "Aha!" moment when you finally beat a problem which you have been working on for some time. It is often through the best of intentions in wishing to share our insights so that others can enjoy that moment too. Sadly, that will rarely be the case for your readers. Real learning is an active process and seeing how it is done is a long way from experiencing that epiphany of discovery. Please do not deny others what you have so richly valued yourself.

However, the rule about sharing solutions outside of Project Euler does not apply to the first one-hundred problems, as long as any discussion clearly aims to instruct methods, not just provide answers, and does not directly threaten to undermine the enjoyment of solving later problems. Problems 1 to 100 provide a wealth of helpful introductory teaching material and if you are able to respect our requirements, then we give permission for those problems and their solutions to be discussed elsewhere.

If you have better answers or more optimal solutions, fork and send me a PR.

Enjoy 😁 !
