Gathering detailed insights and metrics for prosemirror-markdown
Related packages:
@equinor/fusion-wc-markdown: Markdown editor created from ProseMirror markdown
prosemirror-trailing-node: A trailing node plugin for the ProseMirror editor.
prosemirror-commands: Editing commands for ProseMirror
prosemirror-schema-list: List-related schema elements and commands for ProseMirror
npm install prosemirror-markdown
344 Stars
1,003 Commits
81 Forks
10 Watching
1 Branch
37 Contributors
Updated on 25 Nov 2024
TypeScript (100%)
Total Downloads
Last day: 250,545 (-1.8% compared to previous day)
Last week: 1,269,878 (2.2% compared to previous week)
Last month: 5,293,591 (4.7% compared to previous month)
Last year: 44,804,348 (139.5% compared to previous year)
[ WEBSITE | ISSUES | FORUM | GITTER ]
This is a (non-core) module for ProseMirror. ProseMirror is a well-behaved rich semantic content editor based on contentEditable, with support for collaborative editing and custom document schemas.
This module implements a ProseMirror schema that corresponds to the document schema used by CommonMark, and a parser and serializer to convert between ProseMirror documents in that schema and CommonMark/Markdown text.
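As a hedged illustration of that round trip (a minimal sketch using the module's default parser and serializer; the input string is arbitrary):

```typescript
import {defaultMarkdownParser, defaultMarkdownSerializer} from "prosemirror-markdown"

// Parse CommonMark text into a ProseMirror document node in the markdown schema...
const doc = defaultMarkdownParser.parse("Hello *world*, see [ProseMirror](https://prosemirror.net).")

// ...and serialize that document back to CommonMark text.
const markdown = defaultMarkdownSerializer.serialize(doc)
console.log(markdown)
```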
This code is released under an MIT license. There's a forum for general discussion and support requests, and the GitHub bug tracker is the place to report issues.
We aim to be an inclusive, welcoming community. To make that explicit, we have a code of conduct that applies to communication around the project.
schema: Schema<"doc" | "paragraph" | "blockquote" | "horizontal_rule" | "heading" | "code_block" | "ordered_list" | "bullet_list" | "list_item" | "text" | "image" | "hard_break", "em" | "strong" | "link" | "code">
Document schema for the data model used by CommonMark.
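For example (a brief sketch assuming the standard prosemirror-model node-construction API), a document in this schema can be built directly:

```typescript
import {schema} from "prosemirror-markdown"

// Build a small document in the CommonMark schema: a level-1 heading
// followed by a paragraph containing a strong-marked text span.
const doc = schema.node("doc", null, [
  schema.node("heading", {level: 1}, [schema.text("Title")]),
  schema.node("paragraph", null, [
    schema.text("Some "),
    schema.text("bold text", [schema.marks.strong.create()])
  ])
])
```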
class MarkdownParser
A configuration of a Markdown parser. Such a parser uses markdown-it to tokenize a file, and then runs the custom rules it is given over the tokens to create a ProseMirror document tree.
new MarkdownParser(schema: Schema, tokenizer: any, tokens: Object<ParseSpec>)
Create a parser with the given configuration. You can configure the markdown-it parser to parse the dialect you want, and provide a description of the ProseMirror entities those tokens map to in the tokens object, which maps token names to descriptions of what to do with them. Such a description is an object, and may have the following properties:
schema: Schema
The parser's document schema.
tokenizer: any
This parser's markdown-it tokenizer.
tokens: Object<ParseSpec>
The value of the tokens object used to construct this parser. Can be useful to copy and modify to base other parsers on.
parse(text: string) → any
Parse a string as CommonMark markup, and create a ProseMirror document as prescribed by this parser's rules.
interface ParseSpec
Object type used to specify how Markdown tokens should be parsed.
node?: string
This token maps to a single node, whose type can be looked up in the schema under the given name. Exactly one of node, block, or mark must be set.
block?: string
This token (unless noCloseToken is true) comes in _open and _close variants (which are appended to the base token name given as the object property), and wraps a block of content. The block should be wrapped in a node of the type named by the property's value. If the token does not have _open or _close variants, use the noCloseToken option.
mark?: string
This token (again, unless noCloseToken is true) also comes in _open and _close variants, but should add a mark (named by the value) to its content, rather than wrapping it in a node.
attrs?: Attrs
Attributes for the node or mark. When getAttrs is provided, it takes precedence.
getAttrs?: fn(token: any, tokenStream: any[], index: number) → Attrs
A function used to compute the attributes for the node or mark that takes a markdown-it token and returns an attribute object.
noCloseToken?: boolean
Indicates that the markdown-it token has no _open or _close for the nodes. This defaults to true for code_inline, code_block, and fence.
ignore?: boolean
When true, ignore content for the matched token.
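To make these fields concrete, here is a hedged sketch of a parser for a schema extended with a strikethrough mark. The mark name, the schema extension, and the reliance on markdown-it's default preset (whose ~~text~~ tokens are named s) are assumptions for illustration, not defaults of this module:

```typescript
import MarkdownIt from "markdown-it"
import {Schema} from "prosemirror-model"
import {MarkdownParser, defaultMarkdownParser, schema as baseSchema} from "prosemirror-markdown"

// Hypothetical schema: the CommonMark schema plus a "strikethrough" mark.
const mySchema = new Schema({
  nodes: baseSchema.spec.nodes,
  marks: baseSchema.spec.marks.addToEnd("strikethrough", {
    parseDOM: [{tag: "s"}, {tag: "del"}],
    toDOM() { return ["s", 0] }
  })
})

// Reuse the default token specs and add one for markdown-it's `s` token,
// which comes in s_open/s_close variants and maps to a mark.
const myParser = new MarkdownParser(mySchema, MarkdownIt({html: false}), {
  ...defaultMarkdownParser.tokens,
  s: {mark: "strikethrough"}
})

const doc = myParser.parse("This is ~~struck out~~ text.")
```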
defaultMarkdownParser: MarkdownParser
A parser parsing unextended CommonMark, without inline HTML, and producing a document in the basic schema.
class MarkdownSerializer
A specification for serializing a ProseMirror document as Markdown/CommonMark text.
new MarkdownSerializer(nodes: Object<fn(state: MarkdownSerializerState, node: Node, parent: Node, index: number)>, marks: Object<Object>, options?: Object = {})
Construct a serializer with the given configuration. The nodes object should map node names in a given schema to functions that take a serializer state and such a node, and serialize the node.
options
escapeExtraCharacters?: RegExp
Extra characters can be added for escaping. This is passed directly to String.replace(), and the matching characters are preceded by a backslash.
nodes: Object<fn(state: MarkdownSerializerState, node: Node, parent: Node, index: number)>
The node serializer functions for this serializer.
marks: Object<Object>
The mark serializer info.
options: Object
escapeExtraCharacters?: RegExp
Extra characters can be added for escaping. This is passed directly to String.replace(), and the matching characters are preceded by a backslash.
serialize(content: Node, options?: Object = {}) → string
Serialize the content of the given node to CommonMark.
options
tightLists?: boolean
Whether to render lists in a tight style. This can be overridden on a node level by specifying a tight attribute on the node. Defaults to false.
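Continuing the hedged strikethrough sketch from the parser section (the mark name and the ~~ delimiters are assumptions, not part of this module's defaults), a matching serializer can be built by extending the default one:

```typescript
import {MarkdownSerializer, defaultMarkdownSerializer, defaultMarkdownParser} from "prosemirror-markdown"

// Extend the default node and mark serializers with an entry for the
// hypothetical "strikethrough" mark.
const mySerializer = new MarkdownSerializer(
  {...defaultMarkdownSerializer.nodes},
  {
    ...defaultMarkdownSerializer.marks,
    strikethrough: {open: "~~", close: "~~", mixable: true, expelEnclosingWhitespace: true}
  },
  // Escape literal tildes as well, so plain text is not read back as strikethrough.
  {escapeExtraCharacters: /~/g}
)

// serialize() options tweak output per call; tightLists controls whether
// list items are separated by blank lines.
const doc = defaultMarkdownParser.parse("- one\n- two")
const markdown = mySerializer.serialize(doc, {tightLists: true})
```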
class MarkdownSerializerState
This is an object used to track state and expose methods related to markdown serialization. Instances are passed to node and mark serialization methods (see toMarkdown).
options: {tightLists?: boolean, escapeExtraCharacters?: RegExp}
The options passed to the serializer.
wrapBlock(delim: string, firstDelim: string, node: Node, f: fn())
Render a block, prefixing each line with delim, and the first line with firstDelim. node should be the node that is closed at the end of the block, and f is a function that renders the content of the block.
ensureNewLine()
Ensure the current content ends with a newline.
write(content?: string)
Prepare the state for writing output (closing closed paragraphs, adding delimiters, and so on), and then optionally add content (unescaped) to the output.
closeBlock(node: Node)
Close the block for the given node.
text(text: string, escape?: boolean = true)
Add the given text to the document. When escape is not false, it will be escaped.
render(node: Node, parent: Node, index: number)
Render the given node as a block.
renderContent(parent: Node)
Render the contents of parent as block nodes.
renderInline(parent: Node)
Render the contents of parent as inline content.
renderList(node: Node, delim: string, firstDelim: fn(index: number) → string)
Render a node's content as a list. delim should be the extra indentation added to all lines except the first in an item; firstDelim is a function going from an item index to a delimiter for the first line of the item.
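For instance, list-node serializer functions typically use renderList along these lines (a hedged sketch, not the module's exact source):

```typescript
import {MarkdownSerializerState} from "prosemirror-markdown"
import {Node} from "prosemirror-model"

// Bullet list: two spaces of continuation indent, "* " on each item's first line.
function bulletList(state: MarkdownSerializerState, node: Node) {
  state.renderList(node, "  ", () => "* ")
}

// Ordered list: the first-line delimiter is computed from the item index.
function orderedList(state: MarkdownSerializerState, node: Node) {
  state.renderList(node, "   ", i => `${i + 1}. `)
}
```

Functions like these would be registered as the bullet_list and ordered_list entries of a serializer's nodes object.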
esc(str: string, startOfLine?: boolean = false) → string
Escape the given string so that it can safely appear in Markdown content. If startOfLine is true, also escape characters that have special meaning only at the start of the line.
repeat(str: string, n: number) → string
Repeat the given string n times.
markString(mark: Mark, open: boolean, parent: Node, index: number) → string
Get the markdown string for a given opening or closing mark.
getEnclosingWhitespace(text: string) → {leading?: string, trailing?: string}
Get leading and trailing whitespace from a string. Values of leading or trailing property of the return object will be undefined if there is no match.
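To tie these methods together, here is a hedged sketch of two custom node serializer functions; the node types (note and comment) are hypothetical and not part of this module's schema:

```typescript
import {MarkdownSerializerState} from "prosemirror-markdown"
import {Node} from "prosemirror-model"

// Hypothetical "note" block node: render its children as a "> "-prefixed block.
function note(state: MarkdownSerializerState, node: Node) {
  state.wrapBlock("> ", "> ", node, () => state.renderContent(node))
}

// Hypothetical "comment" node holding plain text: emit it as a raw HTML comment.
function comment(state: MarkdownSerializerState, node: Node) {
  state.write("<!-- ")                 // write() emits unescaped output
  state.text(node.textContent, false)  // text(..., false) skips Markdown escaping
  state.write(" -->")
  state.closeBlock(node)               // end the block so a blank line follows
}
```

Such functions would be passed in the nodes object given to the MarkdownSerializer constructor.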
defaultMarkdownSerializer: MarkdownSerializer
A serializer for the basic schema.
No vulnerabilities found.
Reason: no binaries found in the repo
Reason: 0 existing vulnerabilities detected
Reason: license file detected
Reason: 2 commit(s) and 2 issue activity found in the last 90 days -- score normalized to 3
Reason: Found 7/30 approved changesets -- score normalized to 2
Reason: no effort to earn an OpenSSF best practices badge detected
Reason: security policy file not detected
Reason: project is not fuzzed
Reason: branch protection not enabled on development/release branches
Reason: SAST tool is not run on all commits -- score normalized to 0
Last Scanned on 2024-11-18
The Open Source Security Foundation is a cross-industry collaboration to improve the security of open source software (OSS). The Scorecard provides security health metrics for open source projects.