@acmedinotech/docproc
a fully extensible lexer-parser document processor with [enhanced] markdown support
npm install @acmedinotech/docproc
Languages: TypeScript (98.89%), JavaScript (0.71%), Shell (0.4%)
Total Downloads: 0 (last day: 0, last week: 0, last month: 0, last year: 0)
GPL-3.0 License
29 Commits
1 Watcher
3 Branches
1 Contributor
Updated on Jan 15, 2021
Latest Version: 0.8.14
Package Id: @acmedinotech/docproc@0.8.14
Unpacked Size: 861.78 kB
Size: 390.39 kB
File Count: 75
NPM Version: 6.14.8
Node Version: 12.20.0
An extensible document processor, suitable for human-friendly markup. Take it for a drive with your Markdown document of choice:
docproc path/to/your/file
First, let's talk document structure. Human-readable docs are linear, and they're typically organized in groups (blocks). The blocks themselves contain inline data or sub-blocks.
```markdown
## html blocks at different levels

<html>
<div><b>bold</b></div>
</html>

## markdown

> blockquote **bold**

normal paragraph
```
The basic approach of all solid document processors is to use a lexer-parser pattern to break the doc down into its smallest parts, then sequentially put them back together (in our case, as blocks with inline text).
docproc isn't any different there. What docproc aims to do is create a pattern for configuring lexeme detection and block/inline handling. Once you get a sense for how these pieces fit, writing your own processor should be easy.
docproc makes no assumption about what you're trying to process, but it does come with a Markdown (CommonMark) plugin and DinoMark plugin, which enhances CommonMark with more dynamic processing capabilities.
Let's use the following snippet of Markdown as our reference:
```markdown
> **blockquote**

paragraph _**bold italic**_
```
To start, we need to specify the following lexemes:
- `>`
- (space)
- `**`
- `_`
- `\n`

Anything that isn't explicitly identified is grouped together and emitted as its own lexeme.
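A minimal sketch of this kind of lexeme detection might look like the following (hypothetical code for illustration, not docproc's actual API):

```ts
// Sketch of lexeme detection: explicitly-registered lexemes are matched
// first; any run of characters between them is emitted as its own lexeme.
const LEXEMES = ['**', '>', '_', '\n', ' '];

const tokenize = (doc: string): string[] => {
  const tokens: string[] = [];
  let buffer = '';
  let i = 0;
  while (i < doc.length) {
    const match = LEXEMES.find((lex) => doc.startsWith(lex, i));
    if (match) {
      if (buffer) {
        tokens.push(buffer); // flush the unidentified run
        buffer = '';
      }
      tokens.push(match);
      i += match.length;
    } else {
      buffer += doc[i];
      i += 1;
    }
  }
  if (buffer) tokens.push(buffer);
  return tokens;
};

// tokenize('> **blockquote**\n\nparagraph _**bold italic**_')
// => ['>', ' ', '**', 'blockquote', '**', '\n', '\n',
//     'paragraph', ' ', '_', '**', 'bold', ' ', 'italic', '**', '_']
```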
We'll also need to build two block handlers:
- `blockquoteHandler` will only accept lines beginning with `>`. If there are 2 consecutive newlines, the blockquote handler is done.
- `paragraphHandler` accepts anything. Like blockquote, it also terminates after 2 consecutive newlines.

Each instance of a block has its own handler instance.
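Here's a rough sketch of those two handlers (a hypothetical interface, not docproc's real types; termination on 2 consecutive newlines is handled by the dispatch loop traced below):

```ts
// Hypothetical block-handler interface: a handler says whether it can accept
// a token, collects tokens, and later renders its block as an HTML fragment.
interface BlockHandler {
  canAccept(token: string): boolean;
  push(token: string): void;
  toString(): string;
}

class BlockquoteHandler implements BlockHandler {
  private tokens: string[] = [];
  // only accepts a block that starts with '>'
  canAccept(token: string): boolean {
    return this.tokens.length > 0 || token === '>';
  }
  push(token: string): void {
    this.tokens.push(token);
  }
  toString(): string {
    // inline formatting is left out here; see the inline handler sketch below
    return `<blockquote>${this.tokens.slice(1).join('').trim()}</blockquote>`;
  }
}

class ParagraphHandler implements BlockHandler {
  private tokens: string[] = [];
  canAccept(): boolean {
    return true; // paragraphs accept anything
  }
  push(token: string): void {
    this.tokens.push(token);
  }
  toString(): string {
    return `<p>${this.tokens.join('').trim()}</p>`;
  }
}
```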
Finally, we'll need to build two inline handlers:
- `boldHandler` starts and stops with `**` and allows embedded formatting
- `italicHandler` starts and stops with `_` and allows embedded formatting

Let's trace how each token changes the state of the parser, starting at the block level:
| Token(s) | Parser state |
| --- | --- |
| `>` | `blockquoteHandler` can accept and is set as current handler |
| (space), `**`, `blockquote`, `**` | deferred to `blockquoteHandler` |
| `\n`, `\n` | 2 consecutive newlines: `blockquoteHandler` is done |
| `paragraph` | `paragraphHandler` can accept and is set as current handler |
| `_`, `**`, `bold`, (space), `italic`, `**`, `_` | deferred to `paragraphHandler` |
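In code, that block-level dispatch could be sketched like this (continuing the hypothetical handlers from above, not docproc's actual wiring):

```ts
// Sketch of block-level dispatch: the first handler that can accept a token
// becomes the current handler; two consecutive newlines terminate the block.
const blockHandlerFactories = [
  () => new BlockquoteHandler(),
  () => new ParagraphHandler(),
];

const parseBlocks = (tokens: string[]): BlockHandler[] => {
  const blocks: BlockHandler[] = [];
  let current: BlockHandler | null = null;
  let newlines = 0;

  for (const token of tokens) {
    if (token === '\n') {
      if (++newlines >= 2) current = null; // block is done
      continue;
    }
    newlines = 0;
    if (!current) {
      // each block gets its own handler instance
      for (const makeHandler of blockHandlerFactories) {
        const candidate = makeHandler();
        if (candidate.canAccept(token)) {
          current = candidate; // set as current handler
          blocks.push(candidate);
          break;
        }
      }
    }
    current?.push(token);
  }
  return blocks;
};
```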
Pretty simple so far. Now let's look within the block and see what happens with the inline tokens. I'll use the paragraph handler:
| Token(s) | Effect | Handler stack |
| --- | --- | --- |
| `_` | `_` starts `italicHandler`, but since it allows embedding other formatting, it'll first defer the tokens to specific handlers if they exist | `[italicHandler]` |
| `**` | `**` starts `boldHandler` | `[italicHandler, boldHandler]` |
| `bold`, (space), `italic` | collected by `boldHandler` | `[italicHandler, boldHandler]` |
| `**` | `boldHandler` is popped | `[italicHandler]` |
| `_` | `italicHandler` is popped | `[]` |
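A sketch of that handler stack (again hypothetical, not docproc's real inline API) might look like:

```ts
// Sketch of inline handling inside a block: a formatting handler is pushed
// onto the stack when its start token appears and popped when it repeats.
interface InlineHandler {
  marker: string;                       // '**' or '_'
  tag: string;                          // rendered HTML tag: 'b' or 'i'
  children: (string | InlineHandler)[]; // embedded text/formatting
}

const INLINE_MARKERS: Record<string, string> = { '**': 'b', _: 'i' };

const parseInline = (tokens: string[]): (string | InlineHandler)[] => {
  const root: (string | InlineHandler)[] = [];
  const stack: InlineHandler[] = [];
  const top = () => (stack.length ? stack[stack.length - 1].children : root);

  for (const token of tokens) {
    const tag = INLINE_MARKERS[token];
    if (!tag) {
      top().push(token); // plain text is deferred to the innermost handler
    } else if (stack.length && stack[stack.length - 1].marker === token) {
      stack.pop(); // closing marker: handler is popped
    } else {
      const handler: InlineHandler = { marker: token, tag, children: [] };
      top().push(handler); // embed inside the current handler (or the block)
      stack.push(handler); // opening marker: handler is pushed
    }
  }
  return root;
};
```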
When you turn the document into a string, you get all the pieces back, assembled from fragments of HTML returned from the different handlers.
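Continuing the sketch, rendering is just a recursive walk that concatenates each handler's HTML fragment:

```ts
// Render the inline tree back to an HTML fragment; block fragments from each
// BlockHandler.toString() would be joined the same way to form the document.
const renderInline = (nodes: (string | InlineHandler)[]): string =>
  nodes
    .map((n) =>
      typeof n === 'string' ? n : `<${n.tag}>${renderInline(n.children)}</${n.tag}>`
    )
    .join('');

// renderInline(parseInline(['_', '**', 'bold', ' ', 'italic', '**', '_']))
// => '<i><b>bold italic</b></i>'
```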
That's basically it! You can see it all put together in `readme.example.ts`.
Take a deeper dive:
No vulnerabilities found.
OpenSSF Scorecard findings (last scanned on 2025-06-30):
- no binaries found in the repo
- license file detected
- 0 commit(s) and 0 issue activity found in the last 90 days -- score normalized to 0
- Found 0/27 approved changesets -- score normalized to 0
- no effort to earn an OpenSSF best practices badge detected
- security policy file not detected
- project is not fuzzed
- branch protection not enabled on development/release branches
- SAST tool is not run on all commits -- score normalized to 0
- 27 existing vulnerabilities detected
The Open Source Security Foundation is a cross-industry collaboration to improve the security of open source software (OSS). The Scorecard provides security health metrics for open source projects.