WE CONTINUE THE DEVELOPMENT AT go-enry/go-enry. This repository is abandoned: no further updates will be made to the code base, and issues/PRs will not be answered or attended to.
Programming language detector and toolbox to ignore binary or vendored files. enry started as a port to Go of the original linguist Ruby library and has roughly 2x better performance.
The recommended way to install the enry command-line tool is to either download a release or run:
(cd "$(mktemp -d)" && go mod init enry && go get github.com/src-d/enry/v2/cmd/enry)
The enry CLI accepts similar flags (--breakdown/--json) and produces output similar to linguist:
$ enry
97.71% Go
1.60% C
0.31% Shell
0.22% Java
0.07% Ruby
0.05% Makefile
0.04% Scala
0.01% Gnuplot
Note that enry's CLI does not need an actual git repository to work, which is an intentional difference from linguist.
enry is also available as a native Go library with FFI bindings for multiple programming languages.
In a Go module, add enry to the module by running:
go get github.com/src-d/enry/v2
The rest of the examples will assume you have either done this or fetched the library into your GOPATH.
// The examples here and below assume you have imported the library.
import "github.com/src-d/enry/v2"
lang, safe := enry.GetLanguageByExtension("foo.go")
fmt.Println(lang, safe)
// result: Go true
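The same pattern works for the other single-strategy helpers; for example, a quick sketch of detection by filename alone (the exact result depends on the bundled linguist data):
lang, safe := enry.GetLanguageByFilename("Makefile")
fmt.Println(lang, safe)
// result: Makefile true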
lang, safe := enry.GetLanguageByContent("foo.m", []byte("<matlab-code>"))
fmt.Println(lang, safe)
// result: Matlab true
lang, safe := enry.GetLanguageByContent("bar.m", []byte("<objective-c-code>"))
fmt.Println(lang, safe)
// result: Objective-C true
// all strategies together
lang := enry.GetLanguage("foo.cpp", []byte("<cpp-code>"))
// result: C++
Note that the returned boolean value safe is true if there is only one possible language detected.
To get a list of all possible languages for a given file, there is a plural version of the same API.
langs := enry.GetLanguages("foo.h", []byte("<cpp-code>"))
// result: []string{"C", "C++", "Objective-C}
langs := enry.GetLanguagesByExtension("foo.asc", []byte("<content>"), nil)
// result: []string{"AGS Script", "AsciiDoc", "Public Key"}
langs := enry.GetLanguagesByFilename("Gemfile", []byte("<content>"), []string{})
// result: []string{"Ruby"}
Generated Java bindings using a C shared library and JNI are available under java/.
A library is published on Maven as tech.sourced:enry-java for macOS and linux platforms. Windows support is planned under src-d/enry#150.
Generated Python bindings using a C shared library and cffi are WIP under src-d/enry#154.
A library is going to be published on pypi as enry for macOS and linux platforms. Windows support is planned under src-d/enry#150.
The enry library is based on the data from github/linguist version v7.5.1.
As opposed to linguist, the enry CLI tool does not require a full Git repository in the filesystem in order to report languages.
When parsing linguist/samples, the following enry results differ from linguist:
- Heuristics for the ".es" extension in JavaScript could not be parsed, due to an unsupported backreference in the RE2 regexp engine (see the snippet after this list).
- Heuristics for the ".rno" extension in RUNOFF could not be parsed, due to unsupported lookahead in the RE2 regexp engine.
- As of Linguist v5.3.2 it uses a flex-based scanner in C for tokenization, while enry still uses the extract_token regex-based algorithm. See #193.
- The Bayesian classifier can't distinguish "SQL" from "PLpgSQL". See #194.
- Detection of generated files is not supported yet (thus they are not excluded from CLI output). See #213.
- The XML detection strategy is not implemented. See #192.
- Overriding languages and types through .gitattributes is not yet supported. See #18.
- enry CLI output does NOT exclude .gitignore'd files and git submodules, as linguist does.
In all the cases above that have an issue number, we plan to update enry to match linguist's behavior.
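As an illustration of the first two items, Go's regexp package (RE2) rejects backreferences and lookaheads at compile time, so those linguist heuristics simply cannot be loaded as-is. A small sketch:
import (
	"fmt"
	"regexp"
)

// backreferences like \1 are valid in Ruby's engine but not in RE2
_, err := regexp.Compile(`(foo)\1`)
fmt.Println(err) // non-nil: RE2 rejects the pattern as an invalid escape sequence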
Enry's language detection has been compared with Linguist's on linguist/samples.
We got these results:
The histogram shows the number of files (y-axis) per time interval bucket (x-axis). Most of the files were detected faster by enry.
There are several cases where enry is slower than linguist due to the Go regexp engine being slower than Ruby's one, which is based on the oniguruma library, written in C.
See instructions for running enry with oniguruma.
In the movie My Fair Lady, Professor Henry Higgins is a linguist who at the very beginning of the movie enjoys guessing the origin of people based on their accent.
"Enry Iggins" is how Eliza Doolittle, pronounces the name of the Professor.
To build enry's CLI run:
make build
This will generate a binary called enry in the project's root directory.
To run the tests use:
make test
enry re-uses parts of the original github/linguist to generate internal data structures. In order to update to the latest release of linguist do:
$ git clone https://github.com/github/linguist.git .linguist
$ cd .linguist; git checkout <release-tag>; cd ..
# put the new release's commit sha in the generator_test.go (to re-generate .gold test fixtures)
# https://github.com/src-d/enry/blob/13d3d66d37a87f23a013246a1b0678c9ee3d524b/internal/code-generator/generator/generator_test.go#L18
$ make code-generate
To stay in sync, enry needs to be updated when a new release of linguist includes changes to any of the data files that enry's code generator consumes (e.g. languages.yml or heuristics.yml).
There is no automation for detecting the changes in the linguist project, so the process above has to be done manually from time to time.
When submitting a pull request syncing up to a new release, please make sure it only contains the changes in the generated files (in data subdirectory).
Separating all the necessary "manual" code changes into a separate PR that includes some background description and an update to the documentation on "divergences from linguist" is very much appreciated, as it simplifies maintenance (review, release notes, etc.).
Running a benchmark & faster regexp engine
All benchmark scripts are in benchmarks directory.
As benchmarks depend on Ruby and the Github-Linguist gem, make sure you have:
- Ruby (e.g. using rbenv) and bundler installed
- Docker
- native dependencies installed
- the gem built: cd .linguist && bundle install && rake build_gem && cd -
- the gem installed: gem install --no-rdoc --no-ri --local .linguist/github-linguist-*.gem
To run quicker benchmarks you can either run:
make benchmarks
to get average times for the main detection function and strategies over the whole samples set, or:
make benchmarks-samples
if you want to see measurements per sample file.
If you want to reproduce the same benchmarks as reported above:
- Make sure all dependencies are installed
- Install gnuplot (in order to plot the histogram)
- Run ENRY_TEST_REPO="$PWD/.linguist" benchmarks/run.sh (takes ~15h)
It will run the benchmarks for enry and linguist, parse the output, create csv files and plot the histogram.
Oniguruma is CRuby's regular expression engine. It is very fast and performs better than the one built into the Go runtime. enry supports swapping between those two engines thanks to the rubex project. The typical overall speedup from using Oniguruma is 1.5-2x. However, it requires cgo and an external shared library. On macOS with Homebrew, it is:
brew install oniguruma
On Ubuntu, it is
sudo apt install libonig-dev
To build enry with Oniguruma regexps, use the oniguruma build tag:
go get -v -t --tags oniguruma ./...
and then rebuild the project.
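For example, to rebuild just the CLI with the tag enabled (assuming the cmd/enry layout used in the install command above):
go build -tags oniguruma ./cmd/enry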
Apache License, Version 2.0. See LICENSE