Gathering detailed insights and metrics for suffix-thumb
npm install suffix-thumb
Supply Chain: 99.8
Quality: 99.6
Maintenance: 75.9
Vulnerability: 100
License: 100
Languages: JavaScript (54.05%), HTML (45.95%)
MIT License · 7 Stars · 126 Commits · 2 Watchers · 5 Branches · 1 Contributor · Updated on Oct 31, 2024
Latest Version: 5.0.2
Package Id: suffix-thumb@5.0.2
Unpacked Size: 47.12 kB
Size: 12.26 kB
File Count: 23
NPM Version: 8.15.0
Node Version: 18.7.0
Published on: Feb 03, 2023
Discover the minimal rules for mapping two sets of words to one another, according to changes in their suffixes.
It was built for learning rules about verb conjugations, but in a way, it is just a generic compression algorithm.
The assumption is that a word's suffix is the most-often changed part of a word.
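As an illustrative sketch of that assumption (not the library's actual code), the changed suffix of a pair can be found by stripping the longest shared prefix:

```js
// Illustrative: the changed part of a conjugation is usually the suffix.
// Find the longest shared prefix (the stem) of a word pair — whatever
// remains on each side is the suffix that changed.
const splitAtStem = (a, b) => {
  let i = 0
  while (i < a.length && i < b.length && a[i] === b[i]) {
    i += 1
  }
  return { stem: a.slice(0, i), from: a.slice(i), to: b.slice(i) }
}

console.log(splitAtStem('walk', 'walked'))
// { stem: 'walk', from: '', to: 'ed' }
console.log(splitAtStem('speak', 'spoke'))
// { stem: 'sp', from: 'eak', to: 'oke' }
```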
```js
import { learn, convert } from 'suffix-thumb'

let pairs = [
  ['walk', 'walked'],
  ['talk', 'talked'],
  ['go', 'went'],
]
let model = learn(pairs)
/* {
  rules: { k: [ [ 'alk', 'alked' ] ] },
  exceptions: { go: 'went' },
}*/

let out = convert('walk', model)
// 'walked'
```
You can pass in options:
```js
let opts = {
  threshold: 80, // how sloppy our initial rules can be
  min: 0, // rule must satisfy # of pairs
  reverse: true, // compute backward transformation, too
}
let model = learn(pairs, opts)
```
The model also works transforming the words the other way:
```js
import { learn, reverse, convert } from 'suffix-thumb'

let pairs = [
  ['walk', 'walked'],
  ['talk', 'talked'],
  ['go', 'went'],
]
let model = learn(pairs)
let rev = reverse(model)
let out = convert('walked', rev)
// 'walk'
```
By default, the model supports two-way transformation. If you only require one-way, you can do:
```js
learn(pairs, { reverse: false })
```
You can expect the resulting model to be about 5% smaller - not much.
By default, the model is small, but remains human-readable (and human-editable). We can compress it further, turning it into a snowball of inscrutable characters:
```js
import { learn, compress, uncompress, convert } from 'suffix-thumb'

let pairs = [
  ['walk', 'walked'],
  ['talk', 'talked'],
  ['go', 'went'],
]
let model = learn(pairs)
// shrink it
model = compress(model)
// {rules:'LSKs3H2-LNL.S3DH'}
// pop it back
model = uncompress(model)
let out = convert('walk', model)
// 'walked'
```
The models must be uncompressed before they are used, or reversed.
Sometimes you can accidentally pass in an impossible set of transformations. This library quietly ignores duplicate pairs by default.
You can use {verbose: true} to log warnings about this, or validate your input manually:
```js
import { validate } from 'suffix-thumb'
let pairs = [
  ['left', 'right'],
  ['left', 'right-two'],
  ['ok', 'right'],
]
pairs = validate(pairs) // remove dupes (on both sides)
```
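A minimal sketch of what such validation might do, using a hypothetical dedupe helper (not the library's implementation), which drops any pair that repeats a word on either side:

```js
// Hypothetical stand-in for validate(): drop pairs that repeat a word
// on either the left or the right side, keeping the first occurrence.
const dedupe = (pairs) => {
  let seenLeft = new Set()
  let seenRight = new Set()
  return pairs.filter(([l, r]) => {
    if (seenLeft.has(l) || seenRight.has(r)) {
      return false
    }
    seenLeft.add(l)
    seenRight.add(r)
    return true
  })
}

let pairs = [
  ['left', 'right'],
  ['left', 'right-two'], // duplicate left side — dropped
  ['ok', 'right'], // duplicate right side — dropped
]
console.log(dedupe(pairs))
// [ [ 'left', 'right' ] ]
```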
If you are just doing one-way transformation, and not the reverse, you may want to allow duplicates on the right side:
```js
let pairs = [
  ['left', 'right'],
  ['ok', 'right'],
]
let model = learn(pairs, { reverse: false })
let out = convert('ok', model)
// 'right'
```
For each word-pair, it generates all n-suffixes of the left side and all n-suffixes of the right side. Any good correlations between the two suffix sets begin to pop out. Exceptions to these rules are remembered. It then exhaustively reduces any redundancies in these rules.
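The suffix-enumeration step can be sketched in plain JavaScript (a hypothetical illustration, not the library's internals):

```js
// Illustrative sketch: enumerate the n-suffixes of a word, then pair
// every left-side suffix with every right-side suffix as a candidate rule.
const suffixes = (word) => {
  let res = []
  for (let n = 1; n <= word.length; n += 1) {
    res.push(word.slice(-n))
  }
  return res
}

// candidate (left-suffix → right-suffix) rules for one word-pair
const candidateRules = ([left, right]) => {
  let out = []
  for (let a of suffixes(left)) {
    for (let b of suffixes(right)) {
      out.push([a, b])
    }
  }
  return out
}

console.log(suffixes('walk'))
// [ 'k', 'lk', 'alk', 'walk' ]
console.log(candidateRules(['walk', 'walked']).length)
// 24 (4 left suffixes × 6 right suffixes)
```

Good rules (like 'alk' → 'alked') are the candidates that hold across many pairs; the rest are discarded or stored as exceptions.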
There are some compromises, magic numbers, and opinionated decisions, in order to allow productive but imperfect rules.
The library drops case information, and numbers and some characters will not compress properly.
There may be wordlists with few helpful patterns. Conjugation datasets in English and French tend to get ~85% filesize compression.
MIT
No vulnerabilities found.
OpenSSF Scorecard results:
- no binaries found in the repo
- license file detected
- 7 existing vulnerabilities detected
- 0 commit(s) and 0 issue activity found in the last 90 days -- score normalized to 0
- Found 0/4 approved changesets -- score normalized to 0
- no effort to earn an OpenSSF best practices badge detected
- security policy file not detected
- project is not fuzzed
- branch protection not enabled on development/release branches
- SAST tool is not run on all commits -- score normalized to 0

Last Scanned on 2025-07-07
The Open Source Security Foundation is a cross-industry collaboration to improve the security of open source software (OSS). The Scorecard provides security health metrics for open source projects.