Compare commits


41 Commits

Author SHA1 Message Date
Tim Van Baak f1f9ebe9a0 Merge pull request #4 from Jaculabilis/develop (Improved package and object model) 2019-05-21 01:01:00 -07:00
Tim Van Baak e78787a5cf Documentation updates 2019-05-21 00:53:41 -07:00
Tim Van Baak 699dce47af Add missing addendum config check 2019-05-20 20:47:39 -07:00
Tim Van Baak 31f401400c Fix sort type in redirect 2019-05-20 18:47:01 -07:00
Tim Van Baak 9a15fee707 Add links to editor and full page in Session 2019-05-18 14:53:58 -07:00
Tim Van Baak d6d145e7c5 Fix tag order 2019-05-18 14:42:34 -07:00
Tim Van Baak 6d54390f51 De-compact css for readability 2019-05-18 14:42:13 -07:00
Tim Van Baak 2a1c376c44 Override newline for file writes 2019-05-18 14:19:26 -07:00
Tim Van Baak 309dc68127 Use different escaping replacement 2019-05-18 00:45:24 -07:00
Tim Van Baak 56766d3ad3 Fix ~ not being escaped 2019-05-18 00:38:22 -07:00
Tim Van Baak 172d1b2123 Formatting typo in rules page 2019-05-18 00:20:24 -07:00
Tim Van Baak b3875fc7da Statistics refactor and config hookup 2019-05-18 00:16:46 -07:00
Tim Van Baak 8773f6b58f Add title-turn uniqueness check 2019-04-23 23:56:59 -07:00
Tim Van Baak 18eba7c035 Move prev/next to the end of articles with addenda 2019-04-23 23:50:45 -07:00
Tim Van Baak 3ac0b2c738 Improve index sorting options 2019-04-23 16:20:23 -07:00
Tim Van Baak 1f702b5af4 Update config check to use default values dynamically 2019-04-22 14:56:47 -07:00
Tim Van Baak 95d2cddf17 Adjust paths for package org 2019-04-22 14:14:59 -07:00
Tim Van Baak 2243cae653 Make module more module-like 2019-04-22 13:47:16 -07:00
Tim Van Baak 014ff075c1 Config changes for the next round of updates 2019-04-22 13:46:52 -07:00
Tim Van Baak f709efefb8 Add WIP latex export code 2019-04-22 12:52:29 -07:00
Tim Van Baak 312af310e0 Update gitignore 2019-03-10 17:36:24 -07:00
Tim Van Baak c50676c37d Rewrite statistics calculations again 2018-12-07 08:58:41 -08:00
Tim Van Baak c9281b6450 Fix turn count miscounting addendum lengths 2018-11-29 00:27:54 -08:00
Tim Van Baak b15fcbe359 Add undercited articles statistic 2018-11-21 11:42:39 -08:00
Tim Van Baak 7070d460fc Optimize lambdas 2018-11-13 13:04:13 -08:00
Tim Van Baak aeb195e595 Add additional citation checks to editor 2018-11-13 13:03:55 -08:00
Tim Van Baak 9a8ad419a0 Rework printable page for new article model 2018-11-10 15:39:11 -08:00
Tim Van Baak 4ab9f1f1ea Lower margins on index headings 2018-11-07 23:06:45 -08:00
Tim Van Baak f9c1f02b37 Add missing not 2018-11-07 00:11:37 -08:00
Tim Van Baak 6f51bde2cc Fix phantom exclusion in bottom pageranks 2018-11-05 09:36:59 -08:00
Tim Van Baak c125bdbe69 Fix error in player pagerank statistics 2018-11-04 00:58:07 -07:00
Tim Van Baak e6293eab50 Filter out phantoms from bottom pagerank 2018-11-04 00:07:43 -07:00
Tim Van Baak 56421820bb Add pagerank bottom statistic 2018-11-04 00:02:48 -07:00
Tim Van Baak a133f2c865 Add by-turn word count 2018-11-03 21:00:39 -07:00
Tim Van Baak 706b29d202 Add turn number to editor download for addendums 2018-11-03 19:11:04 -07:00
Tim Van Baak ec953b6a99 Remove commented-out code 2018-11-03 19:07:52 -07:00
Tim Van Baak 8260b014f3 Add custom editor 2018-11-03 14:54:13 -07:00
Tim Van Baak cd0e3d895b Fix citation check for new citation semantics 2018-11-03 14:53:37 -07:00
Tim Van Baak 87df1b7480 Fix pagerank crashing on citationless articles 2018-11-02 00:50:34 -07:00
Tim Van Baak 7490cd6f7f Refactor citations and add addendum articles 2018-10-31 15:22:15 -07:00
Tim Van Baak 00cc4e9cfe Copy CSS during lexicon init 2018-10-30 10:17:54 -07:00
25 changed files with 1742 additions and 853 deletions

.gitignore (6 changes)

@@ -102,3 +102,9 @@ ENV/
# Ignore directories in lexicon/
lexicon/*/
# Ignore vscode
.vscode
# Ignore a scratch directory
scratch/


@@ -8,19 +8,64 @@ To play Lexicon, all you really need is something for the scholars to write thei
To aid in playing Lexicon, Lexipython **does** provide the following:
* Specialized markdown parsing into formatted Lexicon entries.
* An editor with a live preview of the parsed result and a download button.
* HTML page generation and interlinking.
* Handy help pages for rules, session information, statistics, and more.
* Handy help pages for rules, markdown formatting, session information, and statistics.
Lexipython **does not** provide:
* Web hosting for the Lexicon pages. The Editor should have a plan for distributing new editions of the Lexicon.
* Programmatic article submission. The current version of Lexipython does not involve a persistent server process. Players must send their articles to the Editor themselves.
* Web hosting for the Lexicon pages. The Editor should have a plan for distributing new editions of the Lexicon, such as GitHub Pages.
* Checks for factual consistency between submitted articles. The Editor is responsible for ensuring scholarly rigor.
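The specialized markdown mentioned above uses `//…//` for italics and `**…**` for bold. A minimal sketch of the emphasis substitutions, using the same regexes that appear in `lexipython/article.py` in this diff (the function name here is illustrative, not part of the codebase):

```python
import re

def render_inline(para):
    # Convert the Lexicon emphasis marks to HTML tags, mirroring the
    # substitutions in lexipython/article.py's from_file_raw
    para = re.sub(r"//([^/]+)//", r"<i>\1</i>", para)
    para = re.sub(r"\*\*([^*]+)\*\*", r"<b>\1</b>", para)
    return para

print(render_inline("A //brillig// day with **vorpal** results."))
# A <i>brillig</i> day with <b>vorpal</b> results.
```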
## Using Lexipython
To run a game of Lexicon with Lexipython, clone this repository out to a new folder:
To run a game of Lexicon with Lexipython, use [git](https://git-scm.com/) to clone this repository out to a new folder.
```
$ git clone https://github.com/Jaculabilis/Lexipython.git [name]
$ git clone https://github.com/Jaculabilis/Lexipython.git
```
Steps for setup:
1. [WIP]
Lexipython requires [Python 3](https://www.python.org/downloads/). It runs with only the Python 3 standard library, but pagerank statistics require `networkx`.
```
$ pip install --user networkx
```
When you have the necessary software installed, open a terminal in the Lexipython directory. You can view the usage of the program with
```
$ python lexipython -h
usage: lexipython [-h] [name] [command]
Lexipython is a Python application for playing the Lexicon RPG.
positional arguments:
name The name of the Lexicon to operate on
command The operation to perform on the Lexicon
optional arguments:
-h, --help show this help message and exit
Run lexipython.py without arguments to list the extant Lexicons.
Available commands:
init Create a Lexicon with the provided name
build Build the Lexicon, then exit
run Launch a persistent server managing the Lexicon
```
Your lexicons are stored in the `lexicon/` folder. Run `python lexipython` to see the status of all lexicons. Except I haven't implemented that yet. Ignore that bit. If you run `python lexipython [name]`, you'll get the status of the named lexicon. That also hasn't been implemented. Whoops!
To create a lexicon, run `python lexipython [name] init` with the name of the lexicon. A folder will be created in `lexicon/` with the given name and some default files will be copied in. You'll need to add a logo image to the folder and edit the config. As players submit articles, place the .txt files in `lexicon/[name]/src/`.
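Going by the header checks in `lexipython/article.py` added in this diff, a source file opens with three `#`-prefixed headers followed by the article body. A hypothetical example (the player name and titles are invented):

```
# Player: Alice
# Turn: 1
# Title: The Jabberwock

Article text goes here, with **bold**, //italics//, and
citations like [[the beast|The Jabberwock's Lair]].
```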
When you finish your initial edits to the config and whenever you want to update the generated HTML files, run `python lexipython [name] build`. Lexipython will regenerate the article pages under `lexicon/[name]/article/` as well as the contents, formatting, rules, session, and statistics pages, and the editor.
To publish the pages, simply copy the lexicon's folder to wherever you're hosting the static files. If you wish, you can leave out the `src/` directory and the status and cfg files. They are not navigable from the public-facing pages.
The `run` command isn't implemented yet either, and to be honest that probably isn't how you're supposed to implement it in the first place. Ignore it for now.
## Configuring a lexicon
[`lexicon.cfg`](lexipython/resources/lexicon.cfg) contains comments explaining the various config options. `PROMPT` and `SESSION_PAGE` should be written as raw HTML, and will be inserted directly into the page. If you wish to use the Addendums rule explained in the main readme, set `ALLOW_ADDENDA` to `True`. If `SEARCHABLE_FILE` is defined, then the Session page will link to a file with all the articles on one page.
## Other notes
At the end of the build, Lexipython will check for players citing themselves. The program does not fault on these checks, because players may be writing articles as Ersatz Scrivener, or otherwise allowed to cite themselves. Watch out for any unexpected output here.


@@ -8,11 +8,11 @@ In Lexicon, each player takes on the role of a scholar. You are cranky, opiniona
## Basic Rules: What Everyone Should Know
1. Each Lexicon has a _topic statement_ that sets the tone for the game. It provides a starting point for shaping the developing world of the Lexicon. As it is a starting point, don't feel constrained to write only about the topics mentioned directly in it.
1. Each Lexicon has a **topic statement** that sets the tone for the game. It provides a starting point for shaping the developing world of the Lexicon. As it is a starting point, don't feel constrained to write only about the topics mentioned directly in it.
1. Articles are sorted under an _index_, a grouping of letters. An article is in an index if its first letter is in that group of letters. "The", "A", and "An" aren't counted in indexing. _Example: One of the indices is JKL. An article titled 'The Jabberwock' would index under JKL, not T's index._
1. Articles are sorted under an **index**, a grouping of letters. An article is in an index if its first letter is in that group of letters. "The", "A", and "An" aren't counted in indexing. _Example: Two indices are JKL and TUV. An article titled 'The Jabberwock' would index under JKL, not TUV._
1. Until the game is over, some of the articles will have been cited, but not yet written. These are called _phantom_ articles. A phantom article has a title, which is defined by the first citation to it, but no content.
1. Until the game is over, some of the articles will have been cited, but not yet written. These are called **phantom articles**. A phantom article has a title, which is defined by the first citation to it, but no content.
1. Generally, an index has a number of "slots" equal to the number of players. When an article is first written or cited, it takes up one slot in its corresponding index.
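A runnable sketch of the indexing rule above, assuming leading articles are stripped before matching. The helper name is invented; the real logic lives in `build.py` and a `titlesort` utility not shown in this diff:

```python
import re

def index_for(title, indices):
    # Ignore a leading "The", "A", or "An" when choosing the index (per
    # the rule above), then match the first letter against each group.
    stripped = re.sub(r"^(The|An|A)\s+", "", title)
    first = stripped[0].upper()
    for pattern in indices:
        if first in pattern.upper():
            return pattern
    return "&c"  # fallback index for titles matching no pattern

print(index_for("The Jabberwock",
                ["ABC", "DEF", "GHI", "JKL", "MNO", "PQRS", "TUV", "WXYZ"]))
# JKL
```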
@@ -26,7 +26,9 @@ In Lexicon, each player takes on the role of a scholar. You are cranky, opiniona
1. There are no hard and fast rules about length, but it is recommended that the Editor enforce a maximum word limit. In general, aiming for 200-300 words is ideal.
1. You must respect and not contradict the factual content of all written articles. You may introduce new facts that put things in a new light, provide alternative interpretations, or flesh out unexplained details in unexpected ways; but you must not _contradict_ what has been previously established as fact. Use the "yes, and" rule from improv acting: accept what your fellow scholars have written and add to it in new ways, rather than trying to undo their work. This rule includes facts that have been established in written articles about the topics of phantom articles.
1. You must respect and not contradict the factual content of all written articles. You may introduce new facts that put things in a new light, provide alternative interpretations, or flesh out unexplained details in unexpected ways; but you must not _contradict_ what has been previously established as fact. Use the "yes, and" rule from improv acting: accept what your fellow scholars have written and add to it in new ways, rather than trying to undo their work.
1. This rule includes facts that have been established in other, written articles about the topics of phantom articles. When you set out to write a phantom article, be sure to check what's been said about the topic already. Lexipython will list the articles that have cited your article.
1. Each article will cite other articles in the Lexicon.
@@ -34,16 +36,18 @@ In Lexicon, each player takes on the role of a scholar. You are cranky, opiniona
1. As a corollary, you may not write phantom articles that you have cited. If you cite an article and then write it later, your former article now cites you, which is forbidden per the above.
1. On the first turn, there are no written articles. Your first article must cite _exactly two_ phantom articles.
1. On the first turn, there are no written articles. Your first article must cite **exactly two** phantom articles.
1. On subsequent turns, your article must cite _exactly two_ phantoms, but you can cite phantoms that already exist. Your article must also cite _at least one_ written article. You can cite more than one.
1. On subsequent turns, your article must cite **exactly two** phantoms, but you can cite phantoms that already exist. Your article must also cite **at least one** written article. You can cite more than one.
1. On the penultimate turn, you must cite _exactly one_ phantom article and _at least two_ written articles.
1. On the penultimate turn, you must cite **exactly one** phantom article and **at least two** written articles.
1. On the final turn, you must cite _at least three_ written articles.
1. On the final turn, you must cite **at least three** written articles.
1. As the game goes on, it may come to pass that a player must write an article in an index, but that index is full, and that player has already cited all the phantoms in it. When this happens, the player instead writes their article as **Ersatz Scrivener**, radical skeptic. Ersatz does not believe in the existence of whatever he is writing about, no matter how obvious it seems to others or how central it is in the developing history of the world. For Ersatz, all references, testimony, etc. with regard to its existence are tragic delusion at best or malicious lies at worst. Unlike the other scholars, Ersatz does not treat the research of his peers as fact, because he does not believe he has peers. Players writing articles as Ersatz are encouraged to lambast the amateur work of his misguided "collaborators".
1. Finally, the rules are always subject to the discretion of the Editor.
## Procedural Rules: Running the Game
### The Editor
@@ -60,7 +64,7 @@ The player running the game is the Editor. The Editor should handle the followin
* **Topic statement.** The topic statement should be vague, but give the players some hooks to begin writing. Examples: "You are all revisionist scholars from the Paleotechnic Era arguing about how the Void Ghost Rebellion led to the overthrow of the cyber-gnostic theocracy and the establishment of the Third Republic"; "In the wake of the Quartile Reformation, you are scholars investigating the influence of Remigrationism on the Disquietists". What happened to the first two Republics or what Remigrationism is are left open for the players to determine.
* **Indices and turns.** In general, the Editor will decide on a number of turns and divide the alphabet into that many indices. Each player then takes one turn in each index. A game of 6 or 8 turns is suggested. _Example: An 8-turn game over the indices ABC/DEF/GHI/JKL/MNO/PQRS/TUV/QXYZ._ The Editor should determine how much time the players can devote to playing Lexicon and set a time limit on turns accordingly.
* **Indices and turns.** In general, the Editor will decide on a number of turns and divide the alphabet into that many indices. Each player then takes one turn in each index. A game of 6 or 8 turns is suggested. _Example: An 8-turn game over the indices ABC/DEF/GHI/JKL/MNO/PQRS/TUV/WXYZ._ The Editor should determine how much time the players can devote to playing Lexicon and set a time limit on turns accordingly.
* **Index assignments.** Each turn, the Editor should assign each player to an index. Unless players have a method of coordinating who is writing what article, it is suggested that the Editor always assign players to write in different indices. The easiest way to do this is to distribute players randomly across the indices for the first turn, then move them through the indices in order, wrapping around to the top from the bottom.
@@ -74,7 +78,7 @@ How the game develops is entirely up to the players, and your group may have a d
* Even if articles don't get too long, having too many articles on one subject can lead to the same problem of writing on the topic becoming too hard to do consistently. Avoid having multiple articles about the same thing, and avoid having too many articles about different facets of one particular element of the world.
* Encyclopedias are written about things in the past. Players may, of course, want to mention how something in the past still affects the world in the present day. However, if players begin to write about purely contemporary things or events, the Lexicon shifts from an _encyclopedic_ work to a _narrative_ one. If that's what you want out of the game, go ahead and do so, but writing about an ongoing narrative instead of settled history introduce the additional complication of keeping abreast of the current state of the plot. It is more difficult for players to avoid contradiction when the facts are changing as they write.
* Encyclopedias are written about things in the past. Players may, of course, want to mention how something in the past still affects the world in the present day. However, if players begin to write about purely contemporary things or events, the Lexicon shifts from an _encyclopedic_ work to a _narrative_ one. If that's what you want out of the game, go ahead and do so, but writing about an ongoing narrative instead of settled history introduces the additional complication of keeping abreast of the current state of the plot. It is more difficult for players to avoid contradiction when the facts are changing as they write.
* Articles whose titles do not begin with a character in any index pattern are sorted to the "&c" index. This usually includes numbers and symbols. If the Editor wants to make purposive use of this, they can assign players to it as an index.
@@ -82,7 +86,9 @@ How the game develops is entirely up to the players, and your group may have a d
The Editor is always free to alter the game procedures when it would make for a better game. The following are some known rule variations:
* **Follow the Phantoms:** Players make two phantom citations on the first turn. On subsequent turns, rather than choosing from phantoms and open slots in an assigned index, players must write an existing phantom. Until all slots are full, players must make one of their phantom citations to a new phantom article and one to an existing phantom.
* **Follow the Phantoms:** Players make phantom citations as normal on the first turn. On subsequent turns, rather than choosing from phantoms and open slots in an assigned index, players must write an existing phantom. Until all slots are full, players must make one of their phantom citations to a new phantom article and one to an existing phantom.
* **Addendums:** In addition to writing new and phantom articles, players can write articles with the same title as an already-written article. The content of these "addendum" articles is added as a postscript at the bottom of the first article written under that title. Addendums can legally cite what their author can cite, not what the main article's author can cite.
* Occasionally, if more players make a citation to an index than there are open slots, the index will be over capacity. If the Editor is assigning players to indices in order, the Editor may need to shift players' index assignments around. This may also be useful for decreasing the number of Ersatz articles, if a player can't write in their assigned index but could write in another.

lexipython.py → lexipython/__main__.py (13 changes; executable file → normal file)

@@ -8,9 +8,9 @@ import argparse
import os
import re
import json
from src.article import LexiconArticle
from src import build
from src import utils
from article import LexiconArticle
import build
import utils
def is_lexicon(name):
"""
@@ -136,10 +136,13 @@ def command_init(name):
# Edit the name field
config = re.sub("Lexicon Title", "Lexicon {}".format(name), config)
# Create the Lexicon's config file
with open(os.path.join(lex_path, "lexicon.cfg"), "w") as config_file:
with open(os.path.join(lex_path, "lexicon.cfg"), "w", newline='') as config_file:
config_file.write(config)
# Copy the CSS file
with open(os.path.join(lex_path, "lexicon.css"), "w", newline='') as css_file:
css_file.write(utils.load_resource("lexicon.css"))
# Create an example page
with open(os.path.join(lex_path, "src", "example-page.txt"), "w") as destfile:
with open(os.path.join(lex_path, "src", "example-page.txt"), "w", newline='') as destfile:
destfile.write(utils.load_resource("example-page.txt"))
# Create an empty status file
open(os.path.join(lex_path, "status"), "w").close()

lexipython/article.py (new file, 316 lines)

@@ -0,0 +1,316 @@
import os
import sys
import re
import utils
class LexiconCitation:
"""
Represents information about a single citation in a Lexicon article.
Members:
id int: citation id within the article, corresponding to a "{cN}"
format hook
text string: alias text linked to the citation target
target string: title of the article being cited
article LexiconArticle: article cited, None until interlink
"""
def __init__(self, id, citation_text, citation_target, article=None):
self.id = id
self.text = citation_text
self.target = citation_target
self.article = article
def __repr__(self):
return "<LexiconCitation(id={0.id}, text=\"{0.text}\", target=\"{0.target}\")>".format(self)
def __str__(self):
return "<[{0.id}]:[[{0.text}|{0.target}]]>".format(self)
def format(self, format_str):
return format_str.format(**self.__dict__)
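The `{cN}` format hooks referenced in the docstrings work by plain `str.format` substitution over the article content; a minimal standalone sketch (the link text and filename are invented):

```python
# Citations are abstracted out of the text as "{cN}" hooks, then
# re-expanded with str.format using a per-article format map.
content = "See {c1} for the full account."
format_map = {"c1": '<a href="The_Jabberwock.html">the beast</a>'}
print(content.format(**format_map))
# See <a href="The_Jabberwock.html">the beast</a> for the full account.
```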
class LexiconArticle:
"""
A Lexicon article and its metadata.
Members defined by __init__:
player string: player who wrote the article
turn integer: turn the article was written for
title string: article title
title_filesafe string: title, escaped, used for filenames
content string: HTML content, with citations replaced by format hooks
citations list of LexiconCitations: citations made by the article
link_class string: CSS class to interpolate (for styling phantoms)
Members undefined until interlink:
addendums list of LexiconArticles: addendum articles to this article
citedby set of LexiconArticles: articles that cite this article
prev_article LexiconArticle: the previous article in read order
next_article LexiconArticle: the next article in read order
"""
def __init__(self, player, turn, title, content, citations):
"""
Creates a LexiconArticle object with the given parameters.
"""
self.player = player
self.turn = turn
self.title = title
self.title_filesafe = utils.titleescape(title)
self.content = content
self.citations = citations
self.link_class = "class=\"phantom\"" if player is None else ""
self.addendums = []
self.citedby = set()
self.prev_article = None
self.next_article = None
def __repr__(self):
return "<LexiconArticle(title={0.title}, turn={0.turn}, player={0.player})>".format(self)
def __str__(self):
return "<\"{0.title}\", {0.player} turn {0.turn}>".format(self)
@staticmethod
def from_file_raw(raw_content):
"""
Parses the contents of a Lexipython source file into a LexiconArticle
object. If the source file is malformed, returns None.
"""
headers = raw_content.split('\n', 3)
if len(headers) != 4:
print("Header read error")
return None
player_header, turn_header, title_header, content_raw = headers
# Validate and sanitize the player header
if not player_header.startswith("# Player:"):
print("Player header missing or corrupted")
return None
player = player_header[9:].strip()
# Validate and sanitize the turn header
if not turn_header.startswith("# Turn:"):
print("Turn header missing or corrupted")
return None
turn = None
try:
turn = int(turn_header[7:].strip())
except:
print("Turn header error")
return None
# Validate and sanitize the title header
if not title_header.startswith("# Title:"):
print("Title header missing or corrupted")
return None
title = utils.titlecase(title_header[8:])
# Parse the content and extract citations
paras = re.split("\n\n+", content_raw.strip())
content = ""
citations = []
format_id = 1
if not paras:
print("No content")
for para in paras:
# Escape angle brackets
para = re.sub("<", "&lt;", para)
para = re.sub(">", "&gt;", para)
# Escape curly braces
para = re.sub("{", "&#123;", para)
para = re.sub("}", "&#125;", para)
# Replace bold and italic marks with tags
para = re.sub(r"//([^/]+)//", r"<i>\1</i>", para)
para = re.sub(r"\*\*([^*]+)\*\*", r"<b>\1</b>", para)
# Replace \\LF with <br>LF
para = re.sub(r"\\\\\n", "<br>\n", para)
# Abstract citations into the citation record
link_match = re.search(r"\[\[(([^|\[\]]+)\|)?([^|\[\]]+)\]\]", para)
while link_match:
# Identify the citation text and cited article
cite_text = link_match.group(2) if link_match.group(2) else link_match.group(3)
cite_title = utils.titlecase(re.sub(r"\s+", " ", link_match.group(3)))
# Record the citation
cite = LexiconCitation(format_id, cite_text, cite_title)
citations.append(cite)
# Stitch the format id in place of the citation
para = para[:link_match.start(0)] + "{c"+str(format_id)+"}" + para[link_match.end(0):]
format_id += 1 # Increment to the next format citation
link_match = re.search(r"\[\[(([^|\[\]]+)\|)?([^|\[\]]+)\]\]", para)
# Convert signature to right-aligned
if para[:1] == '~':
para = "<hr><span class=\"signature\"><p>" + para[1:] + "</p></span>\n"
else:
para = "<p>" + para + "</p>\n"
content += para
return LexiconArticle(player, turn, title, content, citations)
@staticmethod
def parse_from_directory(directory):
"""
Reads and parses each source file in the given directory.
Input: directory, the path to the folder to read
Output: a list of parsed articles
"""
articles = []
print("Reading source files from", directory)
for filename in os.listdir(directory):
path = os.path.join(directory, filename)
# Read only .txt files
if filename[-4:] == ".txt":
print(" Parsing", filename)
with open(path, "r", encoding="utf8") as src_file:
raw = src_file.read()
article = LexiconArticle.from_file_raw(raw)
if article is None:
print(" ERROR")
else:
print(" success:", article.title)
articles.append(article)
return articles
@staticmethod
def interlink(lexicon_articles, config):
"""
Fills out fields on articles that require other articles for context.
Creates phantom articles.
"""
# Preliminary assertion that title/turn is unique
keys = set()
for article in lexicon_articles:
if config['ALLOW_ADDENDA'].lower() == "true":
key = (article.title, article.turn)
if key in keys:
raise ValueError("Found two articles with title '{}' and turn '{}'".format(
*key))
else:
key = article.title
if key in keys:
raise ValueError("Found two articles with title '{}'".format(
article.title))
keys.add(key)
# Sort out which articles are addendums and which titles are phantoms
written_titles = set()
cited_titles = set()
article_by_title = {}
written_articles_ordered = sorted(lexicon_articles, key=lambda a: (a.turn, a.title))
for written_article in written_articles_ordered:
# Track main articles by title
if written_article.title not in written_titles:
article_by_title[written_article.title] = written_article
written_titles.add(written_article.title)
# Append addendums to their parents
else:
parent = article_by_title[written_article.title]
parent.addendums.append(written_article)
# Collect all cited titles
for citation in written_article.citations:
cited_titles.add(citation.target)
# Create articles for each phantom title
for title in cited_titles - written_titles:
phantom_article = LexiconArticle(
None, sys.maxsize, title,
"<p><i>This entry hasn't been written yet.</i></p>", {})
article_by_title[title] = phantom_article
# To interlink the articles, each citation needs to have its .article
# filled in, and that article needs its citedby updated.
for parent in article_by_title.values():
under_title = [parent] + parent.addendums
for citing_article in under_title:
for citation in citing_article.citations:
target_article = article_by_title[citation.target]
citation.article = target_article
target_article.citedby.add(citing_article)
# Sort the articles by turn and title, then fill in prev/next fields
articles_ordered = sorted(article_by_title.values(), key=lambda a: (a.turn, utils.titlesort(a.title)))
for i in range(len(articles_ordered)):
articles_ordered[i].prev_article = articles_ordered[i-1] if i != 0 else None
articles_ordered[i].next_article = articles_ordered[i+1] if i != len(articles_ordered)-1 else None
return articles_ordered
def build_default_content(self):
"""
Builds the contents of the content div for an article page.
"""
content = ""
# Build the main article content block
main_body = self.build_default_article_body()
content += "<div class=\"contentblock\"><h1>{}</h1>{}</div>\n".format(
self.title, main_body)
# Build the main citation content block
main_citations = self.build_default_citeblock()
if main_citations:
content += "<div class=\"contentblock citeblock\">{}</div>\n".format(
main_citations)
# Build any addendum content blocks
for addendum in self.addendums:
add_body = addendum.build_default_article_body()
content += "<div class=\"contentblock\">{}</div>\n".format(add_body)
add_citations = addendum.build_default_citeblock()
if add_citations:
content += "<div class=\"contentblock\">{}</div>\n".format(
add_citations)
# Build the prev/next block
prev_next = self.build_prev_next_block(
self.prev_article, self.next_article)
if prev_next:
content += "<div class=\"contentblock citeblock\">{}</div>\n".format(
prev_next)
return content
def build_default_article_body(self):
"""
Formats citations into the article text and returns the article body.
"""
format_map = {
"c"+str(c.id) : c.format("<a {article.link_class} "\
"href=\"{article.title_filesafe}.html\">{text}</a>")
for c in self.citations
}
return self.content.format(**format_map)
def build_default_citeblock(self):
"""
Builds the contents of a citation contentblock. Skips sections with no
content.
"""
content = ""
# Citations
cites_titles = set()
cites_links = []
for citation in sorted(self.citations, key=lambda c: (utils.titlesort(c.target), c.id)):
if citation.target not in cites_titles:
cites_titles.add(citation.target)
cites_links.append(
citation.format(
"<a {article.link_class} href=\"{article.title_filesafe}.html\">{article.title}</a>"))
cites_str = " / ".join(cites_links)
if len(cites_str) > 0:
content += "<p>Citations: {}</p>\n".format(cites_str)
# Citedby
citedby_titles = set()
citedby_links = []
for article in sorted(self.citedby, key=lambda a: (utils.titlesort(a.title), a.turn)):
if article.title not in citedby_titles:
citedby_titles.add(article.title)
citedby_links.append(
"<a {0.link_class} href=\"{0.title_filesafe}.html\">{0.title}</a>".format(article))
citedby_str = " / ".join(citedby_links)
if len(citedby_str) > 0:
content += "<p>Cited by: {}</p>\n".format(citedby_str)
return content
def build_prev_next_block(self, prev_article, next_article):
"""
For each defined target, links the target page as Previous or Next.
"""
content = ""
# Prev/next links:
if next_article is not None or prev_article is not None:
prev_link = ("<a {0.link_class} href=\"{0.title_filesafe}.html\">&#8592; Previous</a>".format(
prev_article)
if prev_article is not None else "")
next_link = ("<a {0.link_class} href=\"{0.title_filesafe}.html\">Next &#8594;</a>".format(
next_article)
if next_article is not None else "")
content += "<table><tr>\n<td>{}</td>\n<td>{}</td>\n</tr></table>\n".format(
prev_link, next_link)
return content
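The citation syntax handled by `from_file_raw` above can be exercised on its own with the exact regex from the parser. Note the real parser also titlecases targets and collapses whitespace, which this sketch omits; the sample text is invented:

```python
import re

# The citation pattern from from_file_raw: [[alias|Target]] or [[Target]]
pattern = r"\[\[(([^|\[\]]+)\|)?([^|\[\]]+)\]\]"
para = "See [[the beast|The Jabberwock]] and [[Vorpal Sword]]."
cites = []
for m in re.finditer(pattern, para):
    text = m.group(2) if m.group(2) else m.group(3)  # alias, or the title itself
    target = m.group(3)
    cites.append((text, target))
print(cites)
# [('the beast', 'The Jabberwock'), ('Vorpal Sword', 'Vorpal Sword')]
```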

lexipython/build.py (new file, 501 lines)

@@ -0,0 +1,501 @@
# Standard library imports
import os # For reading directories
import re # For parsing lex content
# Application imports
import utils
from article import LexiconArticle
from statistics import LexiconStatistics
class LexiconPage:
"""
An abstraction layer around formatting a Lexicon page skeleton with kwargs
so that kwargs that are constant across pages aren't repeated.
"""
def __init__(self, skeleton=None, page=None):
self.kwargs = {}
self.skeleton = skeleton
if page is not None:
self.skeleton = page.skeleton
self.kwargs = dict(page.kwargs)
def add_kwargs(self, **kwargs):
self.kwargs.update(kwargs)
def format(self, **kwargs):
total_kwargs = {**self.kwargs, **kwargs}
return self.skeleton.format(**total_kwargs)
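The kwargs-merging behavior is easy to exercise in isolation. A minimal sketch with a stand-in skeleton string (the real skeleton is loaded from page-skeleton.html):

```python
# Sketch of LexiconPage's kwargs merging, with a stand-in skeleton string;
# the real skeleton comes from page-skeleton.html.
class LexiconPage:
    def __init__(self, skeleton=None, page=None):
        self.kwargs = {}
        self.skeleton = skeleton
        if page is not None:
            self.skeleton = page.skeleton
            self.kwargs = dict(page.kwargs)

    def add_kwargs(self, **kwargs):
        self.kwargs.update(kwargs)

    def format(self, **kwargs):
        # Per-call kwargs are merged over the stored constants.
        return self.skeleton.format(**{**self.kwargs, **kwargs})

page = LexiconPage(skeleton="{lexicon}: {title} | {content}")
page.add_kwargs(lexicon="Demo Lexicon")
print(page.format(title="Rules", content="..."))
# Demo Lexicon: Rules | ...
```

Constants like the lexicon title are set once with add_kwargs; only the per-page title and content vary per call.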
def article_matches_index(index_type, pattern, article):
if index_type == "char":
return utils.titlesort(article.title)[0].upper() in pattern.upper()
if index_type == "prefix":
return article.title.startswith(pattern)
if index_type == "etc":
return True
raise ValueError("Unknown index type: '{}'".format(index_type))
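A standalone sketch of the matching rules, using a stub article and a simplified stand-in for utils.titlesort (here just lowercasing):

```python
# Standalone sketch of article_matches_index with a stub article and a
# stand-in for utils.titlesort (here just lowercasing).
from collections import namedtuple

Article = namedtuple("Article", ["title"])

def titlesort(title):
    return title.lower()

def article_matches_index(index_type, pattern, article):
    if index_type == "char":
        return titlesort(article.title)[0].upper() in pattern.upper()
    if index_type == "prefix":
        return article.title.startswith(pattern)
    if index_type == "etc":
        return True
    raise ValueError("Unknown index type: '{}'".format(index_type))

a = Article("Dread Tower")
print(article_matches_index("char", "DEF", a))      # True: 'D' is in "DEF"
print(article_matches_index("prefix", "Dread", a))  # True
print(article_matches_index("char", "ABC", a))      # False
print(article_matches_index("etc", "&c.", a))       # True: etc matches anything
```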
def build_contents_page(config, page, articles):
"""
Builds the full HTML of the contents page.
"""
content = "<div class=\"contentblock\">"
# Head the contents page with counts of written and phantom articles
phantom_count = len([article for article in articles if article.player is None])
if phantom_count == 0:
content += "<p>There are <b>{0}</b> entries in this lexicon.</p>\n".format(len(articles))
else:
content += "<p>There are <b>{0}</b> entries, <b>{1}</b> written and <b>{2}</b> phantom.</p>\n".format(
len(articles), len(articles) - phantom_count, phantom_count)
# Prepare article links
link_by_title = {article.title : "<a href=\"../article/{1}.html\"{2}>{0}</a>".format(
article.title, article.title_filesafe,
" class=\"phantom\"" if article.player is None else "")
for article in articles}
# Determine index order
indices = config['INDEX_LIST'].split("\n")
index_by_pri = {}
index_list_order = []
for index in indices:
match = re.match(r"([^[:]+)(\[([-\d]+)\])?:(.+)", index)
index_type = match.group(1)
pattern = match.group(4)
try:
pri_s = match.group(3)
pri = int(pri_s) if pri_s else 0
except ValueError:
raise TypeError("Could not parse index pri '{}' in '{}'".format(pri_s, index))
if pri not in index_by_pri:
index_by_pri[pri] = []
index_by_pri[pri].append((index_type, pattern))
index_list_order.append(pattern)
# Assign articles to indices
articles_by_index = {pattern: [] for pattern in index_list_order}
titlesort_order = sorted(
articles,
key=lambda a: utils.titlesort(a.title))
for article in titlesort_order:
# Find the first index that matches
matched = False
for pri, indices in sorted(index_by_pri.items(), reverse=True):
for index_type, pattern in indices:
# Try to match the index
if article_matches_index(index_type, pattern, article):
articles_by_index[pattern].append(article)
matched = True
# Break out once a match is found
if matched:
break
if matched:
break
if not matched:
raise KeyError("No index matched article '{}'".format(article.title))
# Write index order div
content += utils.load_resource("contents.html")
content += "<div id=\"index-order\" style=\"display:{}\">\n<ul>\n".format(
"block" if config["DEFAULT_SORT"] == "index" else "none")
for pattern in index_list_order:
# Write the index header
content += "<h3>{0}</h3>\n".format(pattern)
# Write all matched articles
for article in articles_by_index[pattern]:
content += "<li>{}</li>\n".format(link_by_title[article.title])
content += "</ul>\n</div>\n"
# Write turn order div
content += "<div id=\"turn-order\" style=\"display:{}\">\n<ul>\n".format(
"block" if config["DEFAULT_SORT"] == "turn" else "none")
turn_numbers = [article.turn for article in articles if article.player is not None]
first_turn, last_turn = min(turn_numbers), max(turn_numbers)
turn_order = sorted(
articles,
key=lambda a: (a.turn, utils.titlesort(a.title)))
check_off = list(turn_order)
for turn_num in range(first_turn, last_turn + 1):
content += "<h3>Turn {0}</h3>\n".format(turn_num)
for article in turn_order:
if article.turn == turn_num:
check_off.remove(article)
content += "<li>{}</li>\n".format(link_by_title[article.title])
if len(check_off) > 0:
content += "<h3>Unwritten</h3>\n"
for article in check_off:
content += "<li>{}</li>\n".format(link_by_title[article.title])
content += "</ul>\n</div>\n"
# Write by-player div
content += "<div id=\"player-order\" style=\"display:{}\">\n<ul>\n".format(
"block" if config["DEFAULT_SORT"] == "player" else "none")
articles_by_player = {}
extant_phantoms = False
for article in turn_order:
if article.player is not None:
if article.player not in articles_by_player:
articles_by_player[article.player] = []
articles_by_player[article.player].append(article)
else:
extant_phantoms = True
for player, player_articles in sorted(articles_by_player.items()):
content += "<h3>{0}</h3>\n".format(player)
for article in player_articles:
content += "<li>{}</li>\n".format(link_by_title[article.title])
if extant_phantoms:
content += "<h3>Unwritten</h3>\n"
for article in titlesort_order:
if article.player is None:
content += "<li>{}</li>\n".format(link_by_title[article.title])
content += "</ul>\n</div>\n"
content += "</div>\n"
# Fill in the page skeleton
return page.format(title="Index", content=content)
def build_rules_page(page):
"""
Builds the full HTML of the rules page.
"""
content = utils.load_resource("rules.html")
# Fill in the entry skeleton
return page.format(title="Rules", content=content)
def build_formatting_page(page):
"""
Builds the full HTML of the formatting page.
"""
content = utils.load_resource("formatting.html")
# Fill in the entry skeleton
return page.format(title="Formatting", content=content)
def build_session_page(page, config):
"""
Builds the full HTML of the session page.
"""
# Misc links
content = '<div class="contentblock misclinks"><table><tr>\n'
content += '<td><a href="../editor.html">Editor</a></td>\n'
if config['SEARCHABLE_FILE']:
content += '<td><a href="../{}">Compiled</a></td>\n'.format(config['SEARCHABLE_FILE'].strip())
content += '</tr></table></div>\n'
# Session content
content += "<div class=\"contentblock\">{}</div>".format(config["SESSION_PAGE"])
return page.format(title="Session", content=content)
def build_statistics_page(config, page, articles):
# Read the config file for which stats to publish.
lines = config['STATISTICS'].split("\n")
stats = []
for line in lines:
stat, toggle = line.split()
if toggle == "on":
stats.append("stat_" + stat)
# Create all the stats blocks.
lexicon_stats = LexiconStatistics(articles)
stats_blocks = []
for stat in stats:
if hasattr(lexicon_stats, stat):
stats_blocks.append(getattr(lexicon_stats, stat)())
else:
print("ERROR: Bad stat {}".format(stat))
content = "\n".join(stats_blocks)
# Fill in the entry skeleton
return page.format(title="Statistics", content=content)
def build_graphviz_file(cite_map):
"""
Builds a citation graph in dot format for Graphviz.
"""
result = []
result.append("digraph G {\n")
# Node labeling
written_entries = list(cite_map.keys())
phantom_entries = set([title for cites in cite_map.values() for title in cites if title not in written_entries])
node_labels = [title[:20] for title in written_entries + list(phantom_entries)]
node_names = [hash(i) for i in node_labels]
for i in range(len(node_labels)):
result.append("{} [label=\"{}\"];\n".format(node_names[i], node_labels[i]))
# Edges
for citer in written_entries:
for cited in cite_map[citer]:
result.append("{}->{};\n".format(hash(citer[:20]), hash(cited[:20])))
# Return result
result.append("overlap=false;\n}\n")
return "".join(result)
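The function above can be sanity-checked on a toy citation map; a self-contained copy (node names come from hash(), so they vary between runs but stay consistent within one):

```python
# Self-contained copy of build_graphviz_file, run on a toy citation map.
def build_graphviz_file(cite_map):
    result = ["digraph G {\n"]
    written_entries = list(cite_map.keys())
    phantom_entries = set(
        title for cites in cite_map.values()
        for title in cites if title not in written_entries)
    # Truncate labels to 20 chars; hash() supplies a numeric node name.
    node_labels = [title[:20] for title in written_entries + list(phantom_entries)]
    for label in node_labels:
        result.append("{} [label=\"{}\"];\n".format(hash(label), label))
    for citer in written_entries:
        for cited in cite_map[citer]:
            result.append("{}->{};\n".format(hash(citer[:20]), hash(cited[:20])))
    result.append("overlap=false;\n}\n")
    return "".join(result)

dot = build_graphviz_file({"Alpha": ["Beta", "Gamma"], "Beta": ["Alpha"]})
print(dot.startswith("digraph G {"))  # True
print(dot.count("->"))                # 3 edges
```

Here Gamma is a phantom: it is cited but never written, so it gets a node but no outgoing edges.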
def build_compiled_page(articles, config):
"""
Builds a page compiling all articles in the Lexicon.
"""
articles = sorted(
articles,
key=lambda a: (utils.titlesort(a.title)))
# Write the header
content = "<html><head><title>{}</title>"\
"<style>span.signature {{ text-align: right; }} "\
"sup {{ vertical-align: top; font-size: 0.6em; }} "\
"u {{ text-decoration-color: #888888; }}</style>"\
"</head><body>\n".format(config["LEXICON_TITLE"])
# Write each article
for article in articles:
# Article title
content += "<div style=\"page-break-inside:avoid;\"><h2>{0.title}</h2>".format(article)
# Article content
format_map = {
"c"+str(c.id) : c.format("<u>{text}</u><sup>{id}</sup>")
for c in article.citations
}
article_content = article.content.format(**format_map)
article_content = article_content.replace("</p>", "</p></div>", 1)
content += article_content
# Article citations
cite_list = "<br>".join(
c.format("{id}. {target}")
for c in article.citations)
cite_block = "<p>{}</p>".format(cite_list)
content += cite_block
# Addendums
for addendum in article.addendums:
# Addendum content
format_map = {
"c"+str(c.id) : c.format("<u>{text}</u><sup>{id}</sup>")
for c in addendum.citations
}
article_content = addendum.content.format(**format_map)
content += article_content
# Addendum citations
cite_list = "<br>".join(
c.format("{id}. {target}")
for c in addendum.citations)
cite_block = "<p>{}</p>".format(cite_list)
content += cite_block
content += "</body></html>"
return content
def latex_from_markdown(raw_content):
content = ""
headers = raw_content.split('\n', 3)
player_header, turn_header, title_header, content_raw = headers
if not turn_header.startswith("# Turn:"):
print("Turn header missing or corrupted")
return None
turn = int(turn_header[7:].strip())
if not title_header.startswith("# Title:"):
print("Title header missing or corrupted")
return None
title = utils.titlecase(title_header[8:])
#content += "\\label{{{}}}\n".format(title)
#content += "\\section*{{{}}}\n\n".format(title)
# Parse content
paras = re.split("\n\n+", content_raw.strip())
for para in paras:
# Escape things
para = re.sub("&mdash;", "---", para)
para = re.sub("&", "\\&", para)
para = re.sub(r"\"(?=\w)", "``", para)
para = re.sub(r"(?<=\w)\"", "''", para)
# Replace bold and italic marks with commands
para = re.sub(r"//([^/]+)//", r"\\textit{\1}", para)
para = re.sub(r"\*\*([^*]+)\*\*", r"\\textbf{\1}", para)
# Footnotify citations
link_match = re.search(r"\[\[(([^|\[\]]+)\|)?([^|\[\]]+)\]\]", para)
while link_match:
# Identify the citation text and cited article
cite_text = link_match.group(2) if link_match.group(2) else link_match.group(3)
cite_title = utils.titlecase(re.sub(r"\s+", " ", link_match.group(3)))
# Stitch the title into a footnote
para = (para[:link_match.start(0)] + cite_text + "\\footnote{" +
cite_title +
", p. \\pageref{" + str(hash(cite_title)) + "}" +
"}" + para[link_match.end(0):])
link_match = re.search(r"\[\[(([^|\[\]]+)\|)?([^|\[\]]+)\]\]", para)
# Convert signature to right-aligned
if para[:1] == '~':
para = "\\begin{flushright}\n" + para[1:] + "\n\\end{flushright}\n\n"
else:
para = para + "\n\n"
content += para
return title, turn, content
def latex_from_directory(directory):
articles = {}
for filename in os.listdir(directory):
path = os.path.join(directory, filename)
# Read only .txt files
if filename[-4:] == ".txt":
with open(path, "r", encoding="utf8") as src_file:
raw = src_file.read()
title, turn, latex = latex_from_markdown(raw)
if title not in articles:
articles[title] = {}
articles[title][turn] = latex
# Write the preamble
content = "\\documentclass[12pt,a4paper,twocolumn,twoside]{article}\n"\
"\\usepackage[perpage]{footmisc}\n"\
"\\begin{document}\n"\
"\n"
for title in sorted(articles.keys(), key=lambda t: utils.titlesort(t)):
under_title = articles[title]
turns = sorted(under_title.keys())
latex = under_title[turns[0]]
# Section header
content += "\\label{{{}}}\n".format(hash(title))
content += "\\section*{{{}}}\n\n".format(title)
# Section content
#format_map = {
# "c"+str(c.id) : c.format("\\footnote{{{target}}}")
# for c in article.citations
#}
#article_content = article.content.format(**format_map)
content += latex
# Addendums
for turn in turns[1:]:
#content += "\\vspace{6pt}\n\\hrule\n\\vspace{6pt}\n\n"
content += "\\begin{center}\n$\\ast$~$\\ast$~$\\ast$\n\\end{center}\n\n"
latex = under_title[turn]
#format_map = {
# "c"+str(c.id) : c.format("\\footnote{{{target}}}")
# for c in addendum.citations
#}
#article_content = addendum.content.format(**format_map)
content += latex
content += "\\end{document}"
content = re.sub(r"\"(?=\w)", "``", content)
content = re.sub(r"(?<=\w)\"", "''", content)
return content
def parse_sort_type(sort):
if sort in "?byindex":
return "?byindex"
if sort in "?byturn":
return "?byturn"
if sort in "?byplayer":
return "?byplayer"
return ""
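The membership test reads backwards at first glance: `sort in "?byindex"` is true when the config value is a substring of the query string, so "index", "byindex", and "?byindex" all normalize to the same redirect query. A quick check with a copy of the function:

```python
# Copy of parse_sort_type: a config value matches if it is a substring
# of the query string it maps to.
def parse_sort_type(sort):
    if sort in "?byindex":
        return "?byindex"
    if sort in "?byturn":
        return "?byturn"
    if sort in "?byplayer":
        return "?byplayer"
    return ""

print(parse_sort_type("index"))   # ?byindex
print(parse_sort_type("turn"))    # ?byturn
print(parse_sort_type("player"))  # ?byplayer
print(parse_sort_type("random"))  # (empty string)
```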
def build_all(path_prefix, lexicon_name):
"""
Builds all browsable articles and pages in the Lexicon.
"""
lex_path = os.path.join(path_prefix, lexicon_name)
# Load the Lexicon's peripherals
config = utils.load_config(lexicon_name)
page_skeleton = utils.load_resource("page-skeleton.html")
page = LexiconPage(skeleton=page_skeleton)
page.add_kwargs(
lexicon=config["LEXICON_TITLE"],
logo=config["LOGO_FILENAME"],
prompt=config["PROMPT"],
sort=parse_sort_type(config["DEFAULT_SORT"]))
# Parse the written articles
articles = LexiconArticle.parse_from_directory(os.path.join(lex_path, "src"))
# Once they've been populated, the articles list has the titles of all articles
# Sort this by turn before title so prev/next links run in turn order
articles = sorted(
LexiconArticle.interlink(articles, config),
key=lambda a: (a.turn, utils.titlesort(a.title)))
def pathto(*els):
return os.path.join(lex_path, *els)
# Write the redirect page
print("Writing redirect page...")
with open(pathto("index.html"), "w", encoding="utf8", newline='') as f:
f.write(utils.load_resource("redirect.html").format(
lexicon=config["LEXICON_TITLE"], sort=parse_sort_type(config["DEFAULT_SORT"])))
# Write the article pages
print("Deleting old article pages...")
for filename in os.listdir(pathto("article")):
if filename[-5:] == ".html":
os.remove(pathto("article", filename))
print("Writing article pages...")
for article in articles:
with open(pathto("article", article.title_filesafe + ".html"), "w", encoding="utf-8", newline='') as f:
content = article.build_default_content()
article_html = page.format(
title = article.title,
content = content)
f.write(article_html)
print(" Wrote " + article.title)
# Write default pages
print("Writing default pages...")
with open(pathto("contents", "index.html"), "w", encoding="utf-8", newline='') as f:
f.write(build_contents_page(config, page, articles))
print(" Wrote Contents")
with open(pathto("rules", "index.html"), "w", encoding="utf-8", newline='') as f:
f.write(build_rules_page(page))
print(" Wrote Rules")
with open(pathto("formatting", "index.html"), "w", encoding="utf-8", newline='') as f:
f.write(build_formatting_page(page))
print(" Wrote Formatting")
with open(pathto("session", "index.html"), "w", encoding="utf-8", newline='') as f:
f.write(build_session_page(page, config))
print(" Wrote Session")
with open(pathto("statistics", "index.html"), "w", encoding="utf-8", newline='') as f:
f.write(build_statistics_page(config, page, articles))
print(" Wrote Statistics")
# Write auxiliary pages
if "SEARCHABLE_FILE" in config and config["SEARCHABLE_FILE"]:
with open(pathto(config["SEARCHABLE_FILE"]), "w", encoding="utf-8", newline='') as f:
f.write(build_compiled_page(articles, config))
print(" Wrote compiled page to " + config["SEARCHABLE_FILE"])
with open(pathto("editor.html"), "w", encoding="utf-8", newline='') as f:
editor = utils.load_resource("editor.html")
writtenArticles = ""
phantomArticles = ""
for article in articles:
citedby = {'"' + citer.player + '"' for citer in article.citedby}
if article.player is None:
phantomArticles += "{{title: \"{0}\", citedby: [{1}]}},".format(
article.title.replace("\"", "\\\""),
",".join(sorted(citedby)))
else:
writtenArticles += "{{title: \"{0}\", author: \"{1.player}\"}},".format(
article.title.replace("\"", "\\\""), article)
nextTurn = 0
if articles:
nextTurn = max([article.turn for article in articles if article.player is not None]) + 1
editor = editor.replace("//writtenArticles", writtenArticles)
editor = editor.replace("//phantomArticles", phantomArticles)
editor = editor.replace("TURNNUMBER", str(nextTurn))
f.write(editor)
# Check that authors aren't citing themselves
print("Running citation checks...")
for parent in articles:
for article in [parent] + parent.addendums:
for citation in article.citations:
if article.player == citation.article.player:
print(" {2}: {0} cites {1}".format(article.title, citation.target, article.player))
print()


@@ -0,0 +1,55 @@
<script type="text/javascript">
const order = {
INDEX: "index",
TURN: "turn",
PLAYER: "player",
}
var currentOrder = order.INDEX;
setOrder = function(orderType)
{
if (orderType == order.INDEX) {
document.getElementById("index-order").style.display = "block";
document.getElementById("turn-order").style.display = "none";
document.getElementById("player-order").style.display = "none";
document.getElementById("toggle-button").innerText = "Switch to turn order";
currentOrder = order.INDEX;
}
else if (orderType == order.TURN) {
document.getElementById("index-order").style.display = "none";
document.getElementById("turn-order").style.display = "block";
document.getElementById("player-order").style.display = "none";
document.getElementById("toggle-button").innerText = "Switch to player order";
currentOrder = order.TURN;
}
else if (orderType == order.PLAYER) {
document.getElementById("index-order").style.display = "none";
document.getElementById("turn-order").style.display = "none";
document.getElementById("player-order").style.display = "block";
document.getElementById("toggle-button").innerText = "Switch to index order";
currentOrder = order.PLAYER;
}
}
contentsToggle = function() {
if (currentOrder == order.INDEX)
setOrder(order.TURN);
else if (currentOrder == order.TURN)
setOrder(order.PLAYER);
else if (currentOrder == order.PLAYER)
setOrder(order.INDEX);
}
window.onload = function(){
if (location.search.search("byindex") > 0)
{
setOrder(order.INDEX);
}
if (location.search.search("byturn") > 0)
{
setOrder(order.TURN);
}
if (location.search.search("byplayer") > 0)
{
setOrder(order.PLAYER);
}
}
</script>
<button id="toggle-button" onClick="javascript:contentsToggle()">Switch to turn order</button>


@@ -0,0 +1,179 @@
<html>
<head>
<title>Lexicon Editor</title>
<style>
html, body { height:100%; margin:0px; }
div.outer { overflow:overlay; }
span.signature { text-align: right; }
a.phantom { color: #ff0000; }
a.denovo { color: #008800; }
@media only screen and (min-width: 768px) {
div.column { float:left; width:50%; }
}
</style>
<script>
writtenArticles = [
//writtenArticles
]
phantomArticles = [
//phantomArticles
]
function updatePreview() {
var articlePlayer = document.getElementById("article-player").value;
var articleTitle = document.getElementById("article-title").value;
var articleBody = document.getElementById("article-body").value;
var previewHtml = "<h1>" + articleTitle + "</h1>\n";
if (phantomArticles.some(e => (e.title === articleTitle && e.citedby.some(p => (p === articlePlayer))))) {
previewHtml += "<p><span style=\"color:#dd0000\">You've cited this article!</span></p>"
}
previewHtml += parseLexipythonMarkdown(articleBody);
document.getElementById("preview").innerHTML = previewHtml;
}
function parseLexipythonMarkdown(text) {
// Parse the content and extract citations
var paras = text.trim().split(/\n\n+/);
content = "";
citationList = [];
formatId = 1;
hasSignature = false;
for (var i = 0; i < paras.length; i++) {
var para = paras[i];
// Escape angle brackets
para = para.replace(/</g, "&lt;");
para = para.replace(/>/g, "&gt;");
// Replace bold and italic marks with tags
para = para.replace(/\/\/([^\/]+)\/\//g, "<i>$1</i>");
para = para.replace(/\*\*([^*]+)\*\*/g, "<b>$1</b>");
// Replace \\LF with <br>LF
para = para.replace(/\\\\\n/g, "<br>\n");
// Abstract citations into the citation record
linkMatch = para.match(/\[\[(([^|\[\]]+)\|)?([^|\[\]]+)\]\]/);
while (linkMatch != null) {
// Identify the citation text and cited article
citeText = linkMatch[2] != null ? linkMatch[2] : linkMatch[3];
citeTitle = linkMatch[3].charAt(0).toUpperCase() + linkMatch[3].slice(1);
citeClass = "class=\"denovo\"";
if (writtenArticles.some(function(e) { return e.title === citeTitle; })) {
citeClass = ""
} else if (phantomArticles.some(function(e) { return e.title === citeTitle; })) {
citeClass = "class=\"phantom\"";
}
// Record the citation
citationList.push([formatId, citeTitle]);
// Stitch the cite text in place of the citation, plus a cite number
para =
para.slice(0, linkMatch.index) +
"<a " +
citeClass +
" href=\"#\">" +
citeText +
"</a>" +
"<sup>" +
formatId.toString() +
"</sup>" +
para.slice(linkMatch.index + linkMatch[0].length);
formatId += 1; // Increment to the next format id
linkMatch = para.match(/\[\[(([^|\[\]]+)\|)?([^|\[\]]+)\]\]/);
}
// Convert signature to right-aligned
if (para.length > 0 && para[0] == "~") {
para = "<hr><span class=\"signature\"><p>" + para.slice(1) + "</p></span>";
hasSignature = true;
} else {
para = "<p>" + para + "</p>\n";
}
content += para;
}
if (!hasSignature) {
content += "<p><span style=\"color:#dd0000\">Article has no signature</span></p>";
}
if (citationList.length > 0) {
var player = document.getElementById("article-player").value;
content += "<p><i>The following articles will be cited:</i></p>\n";
for (var i = 0; i < citationList.length; i++) {
citation = citationList[i][0].toString() + ". " + citationList[i][1];
if (writtenArticles.some(e => (e.title === citationList[i][1]) && (e.author === player))) {
content += "<p><span style=\"color:#ff0000\">" + citation + " [Written by you!]</span></p>";
} else if (writtenArticles.some(e => (e.title === citationList[i][1]))) {
content += "<p>" + citation + " [Written]";
} else if (phantomArticles.some(e => (e.title === citationList[i][1]))) {
content += "<p>" + citation + " [Phantom]";
} else {
content += "<p>" + citation + " [New]";
}
content += "</p>\n";
}
}
// Calculate approximate word count
var wordCount = text.trim().split(/\s+/).length;
if (text.trim().length < 1)
wordCount = 0;
content += "<p><i>Article length: approx. " + wordCount + " words</i></p>";
return content;
}
function download() {
var articlePlayer = document.getElementById("article-player").value;
var articleTurn = document.getElementById("article-turn").value;
var articleTitle = document.getElementById("article-title").value;
var articleBody = document.getElementById("article-body").value;
var articleText =
"# Player: " + articlePlayer + "\n" +
"# Turn: " + articleTurn + "\n" +
"# Title: " + articleTitle + "\n" +
"\n" +
articleBody;
var articleFilename = articleTitle.toLowerCase().replace(/[^a-z0-9- ]/g, "").replace(/ +/g, "-");
articleFilename += "-" + articleTurn.toString();
var downloader = document.createElement("a");
downloader.setAttribute("href", "data:text/plain;charset=utf-8," + encodeURIComponent(articleText));
downloader.setAttribute("download", articleFilename);
if (document.createEvent) {
var event = document.createEvent("MouseEvents");
event.initEvent("click", true, true);
downloader.dispatchEvent(event);
} else {
downloader.click();
}
}
window.onload = updatePreview;
window.addEventListener("beforeunload", function(e) {
var hasText = document.getElementById("article-body").value.length > 0;
if (hasText) {
e.returnValue = "Are you sure?";
}
});
</script>
</head>
<body>
<center>
<h1>Lexicon Editor</h1>
</center>
<button onclick="download()">Download as .txt</button>
<div class="outer">
<div class="column">
<table style="width:100%">
<tr><td># Player:</td>
<td><input id="article-player" style="width:100%;" value="PN" oninput="updatePreview()"/></td>
</tr>
<tr><td># Turn:</td>
<td><input id="article-turn" style="width:100%" value="TURNNUMBER"/></td>
</tr>
<tr><td># Title:</td>
<td><input id="article-title" style="width:100%" value="Example Page" oninput="updatePreview()" /></td>
</tr>
<tr><td colspan="2">
<textarea id="article-body" style="width:100%; resize:vertical" rows=8 oninput="updatePreview()"></textarea>
</td></tr></table>
</div>
<div class="column">
<div id="preview" style="padding: 0 10px"></div>
</div>
</div>
</body>
</html>


@@ -0,0 +1,86 @@
# LEXIPYTHON CONFIG FILE
#
# This file defines the configuration values for an instance of Lexipython.
# The title of the Lexicon game, displayed at the top of each entry.
>>>LEXICON_TITLE>>>
Lexicon Title
<<<LEXICON_TITLE<<<
# The sidebar image. Constrained to 140px.
>>>LOGO_FILENAME>>>
logo.png
<<<LOGO_FILENAME<<<
# The prompt for the Lexicon. Will be read as HTML and inserted into the
# header directly.
>>>PROMPT>>>
<i>Prompt goes here</i>
<<<PROMPT<<<
# Session page content. Will be read as HTML and inserted into the body of
# the session page directly.
>>>SESSION_PAGE>>>
<p>Put session information here, like the index grouping and turn count, where to send completed entries, index assignments, turn schedule, and so on.</p>
<<<SESSION_PAGE<<<
# Index headers.
# An index header is declared as id:pattern or id[pri]:pattern. An article is
# sorted under the first index it matches. Matches are checked in descending
# order of pri, and in list order for indices of equal pri. An undefined pri
# value is 0. After matching is done, indices are written in list order
# regardless of pri. Index patterns must be unique, regardless of index type.
# A character index has id "char". An article matches a character index if the
# first letter of its title is one of the characters in the index's pattern.
# A prefix index has id "prefix". An article matches a prefix index if the
# title begins with the pattern.
# The etc index has id "etc". An article always matches the etc index. The
# pattern is used as the index display name.
>>>INDEX_LIST>>>
char:ABC
char:DEF
char:GHI
char:JKL
char:MNO
char:PQRS
char:TUV
char:WXYZ
etc:&c.
<<<INDEX_LIST<<<
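The id[pri]:pattern form can be checked against the regex that build_contents_page uses; the index line below is illustrative, not part of the default config:

```python
import re

# Parse an index header of the form id[pri]:pattern, as build.py does.
# "prefix[1]:New " is a hypothetical header, not from the default config.
index = "prefix[1]:New "
match = re.match(r"([^[:]+)(\[([-\d]+)\])?:(.+)", index)
index_type = match.group(1)                          # "prefix"
pri = int(match.group(3)) if match.group(3) else 0   # 1
pattern = match.group(4)                             # "New "
print(index_type, pri, repr(pattern))
```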
# Toggles and order for which statistics to display.
# Pagerank-based statistics require networkx to be installed.
>>>STATISTICS>>>
top_pagerank on
most_citations_made on
most_citations_to on
longest_article on
cumulative_wordcount off
player_pagerank on
player_citations_made on
player_citations_to on
bottom_pagerank off
undercited off
<<<STATISTICS<<<
# The default sorting to use on the contents page.
# Allowed values are "index", "turn", and "player"
>>>DEFAULT_SORT>>>
index
<<<DEFAULT_SORT<<<
# Flag to enable addendum articles. If enabled, articles with the same title
# and a later turn than another article will be appended to that article.
>>>ALLOW_ADDENDA>>>
False
<<<ALLOW_ADDENDA<<<
# Graphviz file name. If present, the graph of page citations will be written
# in the .dot format.
>>>GRAPHVIZ_FILE>>>
<<<GRAPHVIZ_FILE<<<
# Searchable version file name. If present, the lexicon will be compiled and
# written into a single, easily-searchable HTML file.
>>>SEARCHABLE_FILE>>>
<<<SEARCHABLE_FILE<<<


@@ -0,0 +1,117 @@
body {
background-color: #eeeeee;
line-height: 1.4;
font-size: 16px;
}
div#wrapper {
max-width: 800px;
position: absolute;
left: 0;
right: 0;
margin: 0 auto;
}
div#header {
padding: 5px;
margin: 5px;
background-color: #ffffff;
box-shadow: 2px 2px 10px #888888;
border-radius: 5px;
}
div#header p, div#header h2 {
margin: 5px;
}
div#sidebar {
width: 200px;
float:left;
margin:5px;
padding: 8px;
text-align: center;
background-color: #ffffff;
box-shadow: 2px 2px 10px #888888;
border-radius: 5px;
}
img#logo {
max-width: 200px;
}
table {
table-layout: fixed;
width: 100%;
}
div#sidebar table {
border-collapse: collapse;
}
div.citeblock table td:first-child + td a {
justify-content: flex-end;
}
div.misclinks table td a {
justify-content: center;
}
table a {
display: flex;
padding: 3px;
background-color: #dddddd;
border-radius: 5px;
text-decoration: none;
}
div#sidebar table a {
justify-content: center;
}
table a:hover {
background-color: #cccccc;
}
div#sidebar table td {
padding: 0px; margin: 3px 0;
border-bottom: 8px solid transparent;
}
div#content {
position: absolute;
right: 0px;
left: 226px;
max-width: 564px;
margin: 5px;
}
div.contentblock {
background-color: #ffffff;
box-shadow: 2px 2px 10px #888888;
margin-bottom: 5px;
padding: 10px;
width: auto;
border-radius: 5px;
}
div.contentblock h3 {
margin: 0.3em 0;
}
a.phantom {
color: #cc2200;
}
div.citeblock a.phantom {
font-style: italic;
}
span.signature {
text-align: right;
}
@media only screen and (max-width: 816px) {
div#wrapper {
padding: 5px;
}
div#header {
max-width: 554px;
margin: 0 auto;
}
div#sidebar {
max-width: 548px;
width: inherit;
float: inherit;
margin: 5px auto;
}
div#content {
max-width: 564px;
position: static;
right: inherit;
margin: 5px auto;
}
img#logo {
max-width: inherit;
width: 100%;
}
}


@@ -55,7 +55,7 @@
<li>As the game goes on, it may come to pass that a player must write an
article in an index, but that index is full, and that player has already
cited all the phantoms in it. When this happens, the player instead writes
-their article as **Ersatz Scrivener**, radical skeptic. Ersatz does not
+their article as <b>Ersatz Scrivener</b>, radical skeptic. Ersatz does not
believe in the existence of whatever he is writing about, no matter how
obvious it seems to others or how central it is in the developing history
of the world. For Ersatz, all references, testimony, etc. with regard to

lexipython/statistics.py

@@ -0,0 +1,317 @@
# Third party imports
try:
import networkx # For pagerank analytics
NETWORKX_ENABLED = True
except ImportError:
NETWORKX_ENABLED = False
# Application imports
from utils import titlesort
def reverse_statistics_dict(stats, reverse=True):
"""
Transforms a dictionary mapping titles to a value into a list of values
and lists of titles. The list is sorted by the value, and the titles are
sorted alphabetically.
"""
rev = {}
for key, value in stats.items():
if value not in rev:
rev[value] = []
rev[value].append(key)
for key, value in rev.items():
rev[key] = sorted(value, key=lambda t: titlesort(t))
return sorted(rev.items(), key=lambda x:x[0], reverse=reverse)
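A self-contained sketch of the transformation, with plain alphabetical sort standing in for titlesort:

```python
# Minimal copy of reverse_statistics_dict, with plain alphabetical sort
# standing in for titlesort. Equal values are grouped into one rank.
def reverse_statistics_dict(stats, reverse=True):
    rev = {}
    for title, value in stats.items():
        rev.setdefault(value, []).append(title)
    for value, titles in rev.items():
        rev[value] = sorted(titles)
    return sorted(rev.items(), key=lambda x: x[0], reverse=reverse)

stats = {"Alpha": 2, "Beta": 1, "Gamma": 2}
print(reverse_statistics_dict(stats))
# [(2, ['Alpha', 'Gamma']), (1, ['Beta'])]
```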
def itemize(stats_list):
"""
Formats a list consisting of tuples of ranks and lists of ranked items.
"""
return map(lambda x: "{0} &ndash; {1}".format(x[0], "; ".join(x[1])), stats_list)
class LexiconStatistics():
"""
A wrapper for a persistent statistics context with some precomputed
values around for convenience.
The existence of addendum articles complicates how some statistics are
computed. An addendum is an article, with its own author, body, and
citations, but in a Lexicon it exists appended to another article. To handle
this, we distinguish an _article_ from a _page_. An article is a unit parsed
from a single source file. A page is a main article and all addendums under
the same title.
"""
def __init__(self, articles):
self.articles = articles
self.min_turn = 0
self.max_turn = 0
self.players = set()
self.title_to_article = {}
self.title_to_page = {}
self.stat_block = "<div class=\"contentblock\"><u>{0}</u><br>{1}</div>\n"
# Pagerank may not be computable if networkx isn't installed.
self.title_to_pagerank = None
for main_article in articles:
page_title = main_article.title
self.title_to_page[page_title] = [main_article]
self.title_to_page[page_title].extend(main_article.addendums)
for article in self.title_to_page[page_title]:
# Disambiguate articles by appending turn number to the title
key = "{0.title} (T{0.turn})".format(article)
self.title_to_article[key] = article
if article.player is not None:
# Phantoms have turn MAXINT by convention
self.min_turn = min(self.min_turn, article.turn)
self.max_turn = max(self.max_turn, article.turn)
self.players.add(article.player)
def _try_populate_pagerank(self):
"""Computes pagerank if networkx is imported."""
if NETWORKX_ENABLED and self.title_to_pagerank is None:
# Create a citation graph linking page titles.
G = networkx.Graph()
for page_title, articles in self.title_to_page.items():
for article in articles:
for citation in article.citations:
G.add_edge(page_title, citation.target)
# Compute pagerank on the page citation graph.
self.title_to_pagerank = networkx.pagerank(G)
# Any page with no links in the citation graph has no pagerank.
# Assign these pagerank 0 to avoid key errors or missing pages in
# the stats.
for page_title, articles in self.title_to_page.items():
if page_title not in self.title_to_pagerank:
self.title_to_pagerank[page_title] = 0
def stat_top_pagerank(self):
"""Computes the top 10 pages by pagerank."""
self._try_populate_pagerank()
if not self.title_to_pagerank:
# If networkx was not successfully imported, skip the pagerank.
top_ranked_items = "networkx must be installed to compute pageranks."
else:
# Get the top ten articles by pagerank.
top_pageranks = reverse_statistics_dict(self.title_to_pagerank)[:10]
# Replace the pageranks with ordinals.
top_ranked = enumerate(map(lambda x: x[1], top_pageranks), start=1)
# Format the ranks into strings.
top_ranked_items = itemize(top_ranked)
# Format the statistics block.
return self.stat_block.format(
"Top 10 articles by page rank:",
"<br>".join(top_ranked_items))
def stat_most_citations_made(self):
"""Computes the top 3 ranks for citations made FROM a page."""
# Determine which pages are cited from all articles on a page.
pages_cited = {
page_title: set()
for page_title in self.title_to_page.keys()}
for page_title, articles in self.title_to_page.items():
for article in articles:
for citation in article.citations:
pages_cited[page_title].add(citation.target)
# Compute the number of unique articles cited by a page.
for page_title, cite_titles in pages_cited.items():
pages_cited[page_title] = len(cite_titles)
# Reverse and itemize the citation counts.
top_citations = reverse_statistics_dict(pages_cited)[:3]
top_citations_items = itemize(top_citations)
# Format the statistics block.
return self.stat_block.format(
"Cited the most pages:",
"<br>".join(top_citations_items))
def stat_most_citations_to(self):
"""Computes the top 3 ranks for citations made TO a page."""
# Determine which pages cite a page.
pages_cited_by = {
page_title: set()
for page_title in self.title_to_page.keys()}
for page_title, articles in self.title_to_page.items():
for article in articles:
for citation in article.citations:
pages_cited_by[citation.target].add(page_title)
# Compute the number of unique articles that cite a page.
for page_title, cite_titles in pages_cited_by.items():
pages_cited_by[page_title] = len(cite_titles)
# Reverse and itemize the citation counts.
top_cited = reverse_statistics_dict(pages_cited_by)[:3]
top_cited_items = itemize(top_cited)
# Format the statistics block.
return self.stat_block.format(
"Cited by the most pages:",
"<br>".join(top_cited_items))
def stat_longest_article(self):
"""Computes the top 3 longest articles."""
# Compute the length of each article (not page).
title_to_article_length = {}
for article_title, article in self.title_to_article.items():
# Write all citation aliases into the article text to accurately
# compute word count as written.
format_map = {
"c"+str(c.id): c.text
for c in article.citations
}
plain_content = article.content.format(**format_map)
word_count = len(plain_content.split())
title_to_article_length[article_title] = word_count
# Reverse and itemize the article lengths.
top_length = reverse_statistics_dict(title_to_article_length)[:3]
top_length_items = itemize(top_length)
# Format the statistics block.
return self.stat_block.format(
"Longest articles:",
"<br>".join(top_length_items))
def stat_cumulative_wordcount(self):
"""Computes the cumulative word count of the lexicon."""
# Initialize all extant turns to 0.
turn_to_cumulative_wordcount = {
turn_num: 0
for turn_num in range(self.min_turn, self.max_turn + 1)
}
for article_title, article in self.title_to_article.items():
# Compute each article's word count.
format_map = {
"c"+str(c.id): c.text
for c in article.citations
}
plain_content = article.content.format(**format_map)
word_count = len(plain_content.split())
# Add the word count to each turn the article exists in.
for turn_num in range(self.min_turn, self.max_turn + 1):
if article.turn <= turn_num:
turn_to_cumulative_wordcount[turn_num] += word_count
# Format the statistics block.
len_list = [(str(k), [str(v)]) for k,v in turn_to_cumulative_wordcount.items()]
return self.stat_block.format(
"Aggregate word count by turn:",
"<br>".join(itemize(len_list)))
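The accumulation above amounts to: each article contributes its word count to its debut turn and every later turn. A toy sketch with invented figures:

```python
# Articles as (turn_written, word_count) pairs; a stand-in for the loop above.
articles = [(1, 100), (1, 50), (2, 200), (3, 25)]
min_turn, max_turn = 1, 3

cumulative = {turn: 0 for turn in range(min_turn, max_turn + 1)}
for turn_written, word_count in articles:
    # An article counts toward every turn from its debut onward.
    for turn in range(min_turn, max_turn + 1):
        if turn_written <= turn:
            cumulative[turn] += word_count
```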
def stat_player_pagerank(self):
"""Computes each player's share of the lexicon's pagerank scores."""
self._try_populate_pagerank()
if not self.title_to_pagerank:
# If networkx was not successfully imported, skip the pagerank.
player_rank_items = "networkx must be installed to compute pageranks."
else:
player_to_pagerank = {
player: 0
for player in self.players}
# Accumulate page pagerank to the main article's author.
for page_title, articles in self.title_to_page.items():
page_author = articles[0].player
if page_author is not None:
player_to_pagerank[page_author] += self.title_to_pagerank[page_title]
# Round pageranks off to 3 decimal places.
for player, pagerank in player_to_pagerank.items():
player_to_pagerank[player] = round(pagerank, 3)
# Reverse and itemize the aggregated pageranks.
player_rank = reverse_statistics_dict(player_to_pagerank)
player_rank_items = itemize(player_rank)
# Format the statistics block.
return self.stat_block.format(
"Player aggregate page rank:",
"<br>".join(player_rank_items))
def stat_player_citations_made(self):
"""Computes the total number of citations made BY each player."""
pages_cited_by_player = {
player: 0
for player in self.players}
# Add the number of citations from each authored article (not page).
for article_title, article in self.title_to_article.items():
if article.player is not None:
pages_cited_by_player[article.player] += len(article.citations)
# Reverse and itemize the counts.
player_cites_made_ranks = reverse_statistics_dict(pages_cited_by_player)
player_cites_made_items = itemize(player_cites_made_ranks)
# Format the statistics block.
return self.stat_block.format(
"Citations made by player:",
"<br>".join(player_cites_made_items))
def stat_player_citations_to(self):
"""Computes the total number of citations made TO each player's
authored pages."""
pages_cited_by_by_player = {
player: 0
for player in self.players}
# Add the number of citations made to each page (not article).
for page_title, articles in self.title_to_page.items():
page_author = articles[0].player
if page_author is not None:
pages_cited_by_by_player[page_author] += len(articles[0].citedby)
# Reverse and itemize the results.
cited_times_ranked = reverse_statistics_dict(pages_cited_by_by_player)
cited_times_items = itemize(cited_times_ranked)
# Format the statistics block.
return self.stat_block.format(
"Citations made to article by player:",
"<br>".join(cited_times_items))
def stat_bottom_pagerank(self):
"""Computes the bottom 10 pages by pagerank."""
self._try_populate_pagerank()
if not self.title_to_pagerank:
# If networkx was not successfully imported, skip the pagerank.
bot_ranked_items = "networkx must be installed to compute pageranks."
else:
# Phantoms have no pagerank, because they don't cite anything.
exclude = [
a.title
for a in self.articles
if a.player is None]
rank_by_written_only = {
k:v
for k,v in self.title_to_pagerank.items()
if k not in exclude}
# Reverse, enumerate, and itemize the bottom 10 by pagerank.
pageranks = reverse_statistics_dict(rank_by_written_only)
bot_ranked = list(enumerate(map(lambda x: x[1], pageranks), start=1))[-10:]
bot_ranked_items = itemize(bot_ranked)
# Format the statistics block.
return self.stat_block.format(
"Bottom 10 articles by page rank:",
"<br>".join(bot_ranked_items))
def stat_undercited(self):
"""Computes which articles have 0 or 1 citations made to them."""
undercited = {
page_title: len(articles[0].citedby)
for page_title, articles in self.title_to_page.items()
if len(articles[0].citedby) < 2}
undercited_items = itemize(reverse_statistics_dict(undercited))
return self.stat_block.format(
"Undercited articles:",
"<br>".join(undercited_items))
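The undercited filter keeps any page with fewer than two incoming citations. The same comprehension on toy data (titles invented):

```python
# Map each page to the set of pages that cite it (toy data).
cited_by = {"Alpha": {"Beta"}, "Beta": set(), "Gamma": {"Alpha", "Beta"}}

# Keep pages with 0 or 1 incoming citations, recording the count.
undercited = {
    title: len(citers)
    for title, citers in cited_by.items()
    if len(citers) < 2}
```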

lexipython/utils.py Normal file

@@ -0,0 +1,87 @@
import os
import re
import io
from urllib import parse
import pkg_resources
# Short utility functions for handling titles
def titlecase(s):
"""
Capitalizes the first word.
"""
s = s.strip()
return s[:1].capitalize() + s[1:]
def titleescape(s):
"""
Makes an article title filename-safe.
"""
s = s.strip()
s = re.sub(r"\s+", '_', s) # Replace whitespace with _
s = re.sub(r"~", '-', s) # parse.quote doesn't catch ~
s = parse.quote(s) # Encode all other characters
s = re.sub(r"%", "", s) # Strip encoding %s
s = s[:64] # Limit to 64 characters
return s
def titlesort(s):
"""
Reduces titles down for sorting.
"""
s = s.lower()
if s.startswith("the "): return s[4:]
if s.startswith("an "): return s[3:]
if s.startswith("a "): return s[2:]
return s
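The least obvious helper is `titleescape`: `parse.quote` leaves `~` unescaped (it is an unreserved character), hence the manual swap, and the `%` signs of percent-encodings are then stripped so filenames stay clean. A standalone sketch (`escape_title` is a hypothetical mirror of the function above):

```python
import re
from urllib import parse

def escape_title(s):
    # Mirror of titleescape: underscore the whitespace, swap ~ for -,
    # percent-encode the rest, then drop the % signs and cap the length.
    s = re.sub(r"\s+", "_", s.strip())
    s = re.sub(r"~", "-", s)
    s = parse.quote(s)
    return re.sub(r"%", "", s)[:64]
```

Non-ASCII titles survive as their hex bytes: "Café & Crème" becomes "CafC3A9_26_CrC3A8me", still unique per title and safe on any filesystem.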
# Load functions
def load_resource(filename, cache={}):
"""Loads files from the resources directory with caching."""
if filename not in cache:
binary = pkg_resources.resource_string("resources", filename)
unistr = binary.decode("utf-8")
cache[filename] = unistr
return cache[filename]
def parse_config_file(f):
"""Parses a Lexipython config file."""
config = {}
line = f.readline()
while line:
# Skim lines until a value definition begins
conf_match = re.match(r">>>([^>]+)>>>\s+", line)
if not conf_match:
line = f.readline()
continue
# Accumulate the conf value until the value ends
conf = conf_match.group(1)
conf_value = ""
line = f.readline()
conf_match = re.match(r"<<<{0}<<<\s+".format(re.escape(conf)), line)
while line and not conf_match:
conf_value += line
line = f.readline()
conf_match = re.match(r"<<<{0}<<<\s+".format(re.escape(conf)), line)
if not line:
raise EOFError("Reached EOF while reading config value {}".format(conf))
config[conf] = conf_value.strip()
return config
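The config format pairs `>>>NAME>>>` openers with matching `<<<NAME<<<` closers. A minimal standalone equivalent of the parser above, using a single backreferenced regex instead of line-by-line reads (sample values invented):

```python
import io
import re

SAMPLE = """\
# Comment lines are skimmed until a delimiter appears.
>>>LEXICON_TITLE>>>
My Lexicon
<<<LEXICON_TITLE<<<
>>>PROMPT>>>
<i>Write about the fall of the tower.</i>
<<<PROMPT<<<
"""

def parse_config(f):
    # Each value runs from >>>NAME>>> to the matching <<<NAME<<<;
    # the backreference \1 enforces that the names agree.
    text = f.read()
    return {
        name: value.strip()
        for name, value in re.findall(
            r">>>([^>]+)>>>\n(.*?)<<<\1<<<", text, re.DOTALL)
    }

config = parse_config(io.StringIO(SAMPLE))
```

Unlike the streaming parser above, this sketch silently ignores an unterminated value rather than raising EOFError.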
def load_config(name):
"""
Loads values from a Lexicon's config file.
"""
with open(os.path.join("lexicon", name, "lexicon.cfg"), "r", encoding="utf8") as f:
config = parse_config_file(f)
# Check that no values are missing that are present in the default config
with io.StringIO(load_resource("lexicon.cfg")) as f:
default_config = parse_config_file(f)
missing_keys = []
for key in default_config.keys():
if key not in config:
missing_keys.append(key)
if missing_keys:
raise KeyError("{} missing config values for: {}".format(name, " ".join(missing_keys)))
return config


@@ -1,211 +0,0 @@
import os
import sys
import re
import src.utils as utils
class LexiconArticle:
"""
A Lexicon article and its metadata.
Members:
player string: the player of the article
turn integer: the turn the article was written for
title string: the article title
title_filesafe string: the title, escaped, used for filenames
content string: the HTML content, with citations replaced by format hooks
citations dict mapping format hook string to tuple of link alias and link target title
wcites list: titles of written articles cited
pcites list: titles of phantom articles cited
citedby list: titles of articles that cite this
The last three are filled in by populate().
"""
def __init__(self, player, turn, title, content, citations):
"""
Creates a LexiconArticle object with the given parameters.
"""
self.player = player
self.turn = turn
self.title = title
self.title_filesafe = utils.titleescape(title)
self.content = content
self.citations = citations
self.wcites = set()
self.pcites = set()
self.citedby = set()
@staticmethod
def from_file_raw(raw_content):
"""
Parses the contents of a Lexipython source file into a LexiconArticle
object. If the source file is malformed, returns None.
"""
headers = raw_content.split('\n', 3)
if len(headers) != 4:
print("Header read error")
return None
player_header, turn_header, title_header, content_raw = headers
# Validate and sanitize the player header
if not player_header.startswith("# Player:"):
print("Player header missing or corrupted")
return None
player = player_header[9:].strip()
# Validate and sanitize the turn header
if not turn_header.startswith("# Turn:"):
print("Turn header missing or corrupted")
return None
turn = None
try:
turn = int(turn_header[7:].strip())
except ValueError:
print("Turn header error")
return None
# Validate and sanitize the title header
if not title_header.startswith("# Title:"):
print("Title header missing or corrupted")
return None
title = utils.titlecase(title_header[8:])
# Parse the content and extract citations
paras = re.split("\n\n+", content_raw.strip())
content = ""
citations = {}
format_id = 1
if not paras:
print("No content")
return None
for para in paras:
# Escape angle brackets
para = re.sub("<", "&lt;", para)
para = re.sub(">", "&gt;", para)
# Escape curly braces
para = re.sub("{", "&#123;", para)
para = re.sub("}", "&#125;", para)
# Replace bold and italic marks with tags
para = re.sub(r"//([^/]+)//", r"<i>\1</i>", para)
para = re.sub(r"\*\*([^*]+)\*\*", r"<b>\1</b>", para)
# Replace \\LF with <br>LF
para = re.sub(r"\\\\\n", "<br>\n", para)
# Abstract citations into the citation record
link_match = re.search(r"\[\[(([^|\[\]]+)\|)?([^|\[\]]+)\]\]", para)
while link_match:
# Identify the citation text and cited article
cite_text = link_match.group(2) if link_match.group(2) else link_match.group(3)
cite_title = utils.titlecase(re.sub(r"\s+", " ", link_match.group(3)))
# Record the citation
citations["c"+str(format_id)] = (cite_text, cite_title)
# Stitch the format id in place of the citation
para = para[:link_match.start(0)] + "{c"+str(format_id)+"}" + para[link_match.end(0):]
format_id += 1 # Increment to the next format citation
link_match = re.search(r"\[\[(([^|\[\]]+)\|)?([^|\[\]]+)\]\]", para)
# Convert signature to right-aligned
if para[:1] == '~':
para = "<hr><span class=\"signature\"><p>" + para[1:] + "</p></span>\n"
else:
para = "<p>" + para + "</p>\n"
content += para
return LexiconArticle(player, turn, title, content, citations)
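The citation-abstraction loop above rewrites `[[alias|Target]]` and `[[Target]]` markup into `{cN}` format hooks while recording the alias/target pairs. A standalone sketch of that loop (the sample paragraph is invented, and the `titlecase` normalization of targets is omitted here):

```python
import re

# Optional group 2 is the alias; group 3 is always the cited title.
CITE = r"\[\[(([^|\[\]]+)\|)?([^|\[\]]+)\]\]"

para = "He fled to [[the city|Cities of the Plain]] and read [[The Ivory Tower]]."

citations = {}
format_id = 1
match = re.search(CITE, para)
while match:
    cite_text = match.group(2) if match.group(2) else match.group(3)
    cite_title = re.sub(r"\s+", " ", match.group(3))
    citations["c" + str(format_id)] = (cite_text, cite_title)
    # Stitch the format hook in place of the citation markup.
    para = para[:match.start(0)] + "{c" + str(format_id) + "}" + para[match.end(0):]
    format_id += 1
    match = re.search(CITE, para)
```

The resulting hooks are later filled with `str.format(**format_map)`, which is why the source paragraphs have their literal curly braces escaped first.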
@staticmethod
def parse_from_directory(directory):
"""
Reads and parses each source file in the given directory.
Input: directory, the path to the folder to read
Output: a list of parsed articles
"""
articles = []
print("Reading source files from", directory)
for filename in os.listdir(directory):
path = os.path.join(directory, filename)
# Read only .txt files
if filename[-4:] == ".txt":
print(" Parsing", filename)
with open(path, "r", encoding="utf8") as src_file:
raw = src_file.read()
article = LexiconArticle.from_file_raw(raw)
if article is None:
print(" ERROR")
else:
print(" success:", article.title)
articles.append(article)
return articles
@staticmethod
def populate(lexicon_articles):
"""
Given a list of lexicon articles, fills out citation information
for each article and creates phantom pages for missing articles.
"""
article_by_title = {article.title : article for article in lexicon_articles}
# Determine all articles that exist or should exist
extant_titles = set([citation[1] for article in lexicon_articles for citation in article.citations.values()])
# Interlink all citations
for article in lexicon_articles:
for cite_tuple in article.citations.values():
target = cite_tuple[1]
# Create article objects for phantom citations
if target not in article_by_title:
article_by_title[target] = LexiconArticle(None, sys.maxsize, target,
"<p><i>This entry hasn't been written yet.</i></p>", {})
# Interlink citations
if article_by_title[target].player is None:
article.pcites.add(target)
else:
article.wcites.add(target)
article_by_title[target].citedby.add(article.title)
return list(article_by_title.values())
def build_default_contentblock(self):
"""
Formats citations into the article content as normal HTML links
and returns the result.
"""
format_map = {
format_id: "<a href=\"{1}.html\"{2}>{0}</a>".format(
cite_tuple[0], utils.titleescape(cite_tuple[1]),
"" if cite_tuple[1] in self.wcites else " class=\"phantom\"")
for format_id, cite_tuple in self.citations.items()
}
article_content = self.content.format(**format_map)
return "<div class=\"contentblock\">\n<h1>{}</h1>\n{}</div>\n".format(
self.title,
article_content)
def build_default_citeblock(self, prev_article, next_article):
"""
Builds the citeblock content HTML for use in regular article pages.
For each defined target, links the target page as Previous or Next.
"""
citeblock = "<div class=\"contentblock citeblock\">\n"
# Prev/next links:
if next_article is not None or prev_article is not None:
prev_link = ("<a href=\"{}.html\"{}>&#8592; Previous</a>".format(
prev_article.title_filesafe,
" class=\"phantom\"" if prev_article.player is None else "")
if prev_article is not None else "")
next_link = ("<a href=\"{}.html\"{}>Next &#8594;</a>".format(
next_article.title_filesafe,
" class=\"phantom\"" if next_article.player is None else "")
if next_article is not None else "")
citeblock += "<table><tr>\n<td>{}</td>\n<td>{}</td>\n</tr></table>\n".format(
prev_link, next_link)
# Citations
cites_links = [
"<a href=\"{1}.html\"{2}>{0}</a>".format(
title, utils.titleescape(title),
"" if title in self.wcites else " class=\"phantom\"")
for title in sorted(
self.wcites | self.pcites,
key=lambda t: utils.titlesort(t))]
cites_str = " / ".join(cites_links)
if len(cites_str) < 1: cites_str = "&mdash;"
citeblock += "<p>Citations: {}</p>\n".format(cites_str)
# Citedby
citedby_links = [
"<a href=\"{1}.html\">{0}</a>".format(
title, utils.titleescape(title))
for title in sorted(
self.citedby,
key=lambda t: utils.titlesort(t))]
citedby_str = " / ".join(citedby_links)
if len(citedby_str) < 1: citedby_str = "&mdash;"
citeblock += "<p>Cited by: {}</p>\n</div>\n".format(citedby_str)
return citeblock


@@ -1,418 +0,0 @@
import sys # For argv and stderr
import os # For reading directories
import re # For parsing lex content
import io # For writing pages out as UTF-8
import networkx # For pagerank analytics
from collections import defaultdict # For rank inversion in statistics
from src import utils
from src.article import LexiconArticle
class LexiconPage:
"""
An abstraction layer around formatting a Lexicon page skeleton with kwargs
so that kwargs that are constant across pages aren't repeated.
"""
def __init__(self, skeleton=None, page=None):
self.kwargs = {}
self.skeleton = skeleton
if page is not None:
self.skeleton = page.skeleton
self.kwargs = dict(page.kwargs)
def add_kwargs(self, **kwargs):
self.kwargs.update(kwargs)
def format(self, **kwargs):
total_kwargs = {**self.kwargs, **kwargs}
return self.skeleton.format(**total_kwargs)
def build_contents_page(page, articles, index_list):
"""
Builds the full HTML of the contents page.
"""
content = "<div class=\"contentblock\">"
# Head the contents page with counts of written and phantom articles
phantom_count = len([article for article in articles if article.player is None])
if phantom_count == 0:
content += "<p>There are <b>{0}</b> entries in this lexicon.</p>\n".format(len(articles))
else:
content += "<p>There are <b>{0}</b> entries, <b>{1}</b> written and <b>{2}</b> phantom.</p>\n".format(
len(articles), len(articles) - phantom_count, phantom_count)
# Prepare article links
link_by_title = {article.title : "<a href=\"../article/{1}.html\"{2}>{0}</a>".format(
article.title, article.title_filesafe,
" class=\"phantom\"" if article.player is None else "")
for article in articles}
# Write the articles in alphabetical order
content += utils.load_resource("contents.html")
content += "<div id=\"index-order\" style=\"display:none\">\n<ul>\n"
indices = index_list.split("\n")
alphabetical_order = sorted(
articles,
key=lambda a: utils.titlesort(a.title))
check_off = list(alphabetical_order)
for index_str in indices:
content += "<h3>{0}</h3>\n".format(index_str)
for article in alphabetical_order:
if (utils.titlesort(article.title)[0].upper() in index_str):
check_off.remove(article)
content += "<li>{}</li>\n".format(link_by_title[article.title])
if len(check_off) > 0:
content += "<h3>&c.</h3>\n"
for article in check_off:
content += "<li>{}</li>\n".format(link_by_title[article.title])
content += "</ul>\n</div>\n"
# Write the articles in turn order
content += "<div id=\"turn-order\" style=\"display:none\">\n<ul>\n"
turn_numbers = [article.turn for article in articles if article.player is not None]
first_turn, last_turn = min(turn_numbers), max(turn_numbers)
turn_order = sorted(
articles,
key=lambda a: (a.turn, utils.titlesort(a.title)))
check_off = list(turn_order)
for turn_num in range(first_turn, last_turn + 1):
content += "<h3>Turn {0}</h3>\n".format(turn_num)
for article in turn_order:
if article.turn == turn_num:
check_off.remove(article)
content += "<li>{}</li>\n".format(link_by_title[article.title])
if len(check_off) > 0:
content += "<h3>Unwritten</h3>\n"
for article in check_off:
content += "<li>{}</li>\n".format(link_by_title[article.title])
content += "</ul>\n</div>\n"
# Fill in the page skeleton
return page.format(title="Index", content=content)
def build_rules_page(page):
"""
Builds the full HTML of the rules page.
"""
content = utils.load_resource("rules.html")
# Fill in the entry skeleton
return page.format(title="Rules", content=content)
def build_formatting_page(page):
"""
Builds the full HTML of the formatting page.
"""
content = utils.load_resource("formatting.html")
# Fill in the entry skeleton
return page.format(title="Formatting", content=content)
def build_session_page(page, session_content):
"""
Builds the full HTML of the session page.
"""
# Fill in the entry skeleton
content = "<div class=\"contentblock\">{}</div>".format(session_content)
return page.format(title="Session", content=content)
def reverse_statistics_dict(stats, reverse=True):
"""
Transforms a dictionary mapping titles to a value into a list of values
and lists of titles. The list is sorted by the value, and the titles are
sorted alphabetically.
"""
rev = {}
for key, value in stats.items():
if value not in rev:
rev[value] = []
rev[value].append(key)
for key, value in rev.items():
rev[key] = sorted(value, key=lambda t: utils.titlesort(t))
return sorted(rev.items(), key=lambda x:x[0], reverse=reverse)
def itemize(stats_list):
"""Formats each (value, [titles]) pair as "value &ndash; title; title"."""
return map(lambda x: "{0} &ndash; {1}".format(x[0], "; ".join(x[1])), stats_list)
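Together these two helpers turn a title-to-value map into ranked display lines. A standalone sketch of the same transformation (`rank_stats` is a hypothetical equivalent; the real helper sorts titles with `utils.titlesort`, plain alphabetical order is used here):

```python
from collections import defaultdict

def rank_stats(stats, reverse=True):
    # Bucket titles by value, then sort buckets by value,
    # descending by default.
    buckets = defaultdict(list)
    for title, value in stats.items():
        buckets[value].append(title)
    return sorted(
        ((value, sorted(titles)) for value, titles in buckets.items()),
        key=lambda pair: pair[0], reverse=reverse)

ranks = rank_stats({"Gamma": 2, "Alpha": 1, "Beta": 2})
items = ["{0} &ndash; {1}".format(v, "; ".join(ts)) for v, ts in ranks]
```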
def build_statistics_page(page, articles):
"""
Builds the full HTML of the statistics page.
"""
content = ""
cite_map = {
article.title : [
cite_tuple[1]
for cite_tuple
in article.citations.values()
]
for article in articles}
# Top pages by pagerank
# Compute pagerank for each article
G = networkx.Graph()
for citer, citeds in cite_map.items():
for cited in citeds:
G.add_edge(citer, cited)
rank_by_article = networkx.pagerank(G)
# Get the top ten articles by pagerank
top_pageranks = reverse_statistics_dict(rank_by_article)[:10]
# Replace the pageranks with ordinals
top_ranked = enumerate(map(lambda x: x[1], top_pageranks), start=1)
# Format the ranks into strings
top_ranked_items = itemize(top_ranked)
# Write the statistics to the page
content += "<div class=\"contentblock\">\n"
content += "<u>Top 10 pages by page rank:</u><br>\n"
content += "<br>\n".join(top_ranked_items)
content += "</div>\n"
# Top number of citations made
citations_made = { title : len(cites) for title, cites in cite_map.items() }
top_citations = reverse_statistics_dict(citations_made)[:3]
top_citations_items = itemize(top_citations)
content += "<div class=\"contentblock\">\n"
content += "<u>Most citations made from:</u><br>\n"
content += "<br>\n".join(top_citations_items)
content += "</div>\n"
# Top number of times cited
# Build a map of what cites each article
all_cited = set([title for citeds in cite_map.values() for title in citeds])
cited_by_map = {
cited: [
citer
for citer in cite_map.keys()
if cited in cite_map[citer]]
for cited in all_cited }
# Compute the number of citations to each article
citations_to = { title : len(cites) for title, cites in cited_by_map.items() }
top_cited = reverse_statistics_dict(citations_to)[:3]
top_cited_items = itemize(top_cited)
content += "<div class=\"contentblock\">\n"
content += "<u>Most citations made to:</u><br>\n"
content += "<br>\n".join(top_cited_items)
content += "</div>\n"
# Top article length, roughly by words
article_length = {}
for article in articles:
format_map = {
format_id: cite_tuple[0]
for format_id, cite_tuple in article.citations.items()
}
plain_content = article.content.format(**format_map)
wordcount = len(plain_content.split())
article_length[article.title] = wordcount
top_length = reverse_statistics_dict(article_length)[:3]
top_length_items = itemize(top_length)
content += "<div class=\"contentblock\">\n"
content += "<u>Longest article:</u><br>\n"
content += "<br>\n".join(top_length_items)
content += "</div>\n"
# Total word count
content += "<div class=\"contentblock\">\n"
content += "<u>Total word count:</u><br>\n"
content += str(sum(article_length.values()))
content += "</div>\n"
# Player pageranks
players = sorted(set([article.player for article in articles if article.player is not None]))
articles_by_player = {
player : [
a
for a in articles
if a.player == player]
for player in players}
pagerank_by_player = {
player : round(
sum(map(
lambda a: rank_by_article[a.title] if a.title in rank_by_article else 0,
articles)),
3)
for player, articles
in articles_by_player.items()}
player_rank = reverse_statistics_dict(pagerank_by_player)
player_rank_items = itemize(player_rank)
content += "<div class=\"contentblock\">\n"
content += "<u>Player total page rank:</u><br>\n"
content += "<br>\n".join(player_rank_items)
content += "</div>\n"
# Player citations made
player_cite_count = {
player : sum(map(lambda a:len(a.wcites | a.pcites), articles))
for player, articles in articles_by_player.items()}
player_cites_made_ranks = reverse_statistics_dict(player_cite_count)
player_cites_made_items = itemize(player_cites_made_ranks)
content += "<div class=\"contentblock\">\n"
content += "<u>Citations made by player</u><br>\n"
content += "<br>\n".join(player_cites_made_items)
content += "</div>\n"
# Player cited count
cited_times = {player : 0 for player in players}
for article in articles:
if article.player is not None:
cited_times[article.player] += len(article.citedby)
cited_times_ranked = reverse_statistics_dict(cited_times)
cited_times_items = itemize(cited_times_ranked)
content += "<div class=\"contentblock\">\n"
content += "<u>Citations made to player</u><br>\n"
content += "<br>\n".join(cited_times_items)
content += "</div>\n"
# Fill in the entry skeleton
return page.format(title="Statistics", content=content)
def build_graphviz_file(cite_map):
"""
Builds a citation graph in dot format for Graphviz.
"""
result = []
result.append("digraph G {\n")
# Node labeling
written_entries = list(cite_map.keys())
phantom_entries = set([title for cites in cite_map.values() for title in cites if title not in written_entries])
node_labels = [title[:20] for title in written_entries + list(phantom_entries)]
node_names = [hash(i) for i in node_labels]
for i in range(len(node_labels)):
result.append("{} [label=\"{}\"];\n".format(node_names[i], node_labels[i]))
# Edges
for citer in written_entries:
for cited in cite_map[citer]:
result.append("{}->{};\n".format(hash(citer[:20]), hash(cited[:20])))
# Return result
result.append("overlap=false;\n}\n")
return "".join(result)
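The dot output above can be sketched more simply by using the quoted, truncated titles themselves as node ids instead of `hash()` values (hypothetical simplification; the cite map below is invented):

```python
def build_graphviz(cite_map):
    # One node per written or phantom title, one edge per citation.
    written = list(cite_map.keys())
    phantoms = {t for cites in cite_map.values()
                for t in cites if t not in written}
    lines = ["digraph G {"]
    for title in written + sorted(phantoms):
        lines.append('"{0}" [label="{0}"];'.format(title[:20]))
    for citer, citeds in cite_map.items():
        for cited in citeds:
            lines.append('"{}"->"{}";'.format(citer[:20], cited[:20]))
    lines.append("overlap=false;")
    lines.append("}")
    return "\n".join(lines)

dot = build_graphviz({"Alpha": ["Beta"], "Beta": []})
```

Quoted string ids sidestep a subtlety of the original: Python 3 salts `hash()` per process, so the numeric node names differ between runs even though each single output stays internally consistent.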
def build_compiled_page(articles, config):
"""
Builds a page compiling all articles in the Lexicon.
"""
# Sort by turn and title
turn_order = sorted(
articles,
key=lambda a: (a.turn, utils.titlesort(a.title)))
# Build the content of each article
css = utils.load_resource("lexicon.css")
css += "\n"\
"body { background: #ffffff; }\n"\
"sup { vertical-align: top; font-size: 0.6em; }\n"
content = "<html>\n"\
"<head>\n"\
"<title>{lexicon}</title>\n"\
"<style>\n"\
"{css}\n"\
"</style>\n"\
"<body>\n"\
"<h1>{lexicon}</h1>".format(
lexicon=config["LEXICON_TITLE"],
css=css)
for article in turn_order:
# Stitch in superscripts for citations
format_map = {
format_id: "{}<sup>{}</sup>".format(cite_tuple[0], format_id[1:])
for format_id, cite_tuple in article.citations.items()
}
article_body = article.content.format(**format_map)
# Stitch a page-break-avoid div around the header and first paragraph
article_body = article_body.replace("</p>", "</p></div>", 1)
# Append the citation block
cite_list = "<br>\n".join(
"{}. {}\n".format(format_id[1:], cite_tuple[1])
for format_id, cite_tuple in sorted(
article.citations.items(),
key=lambda t:int(t[0][1:])))
cite_block = "" if article.player is None else ""\
"<p><i>Citations:</i><br>\n"\
"{}\n</p>".format(cite_list)
article_block = "<div style=\"page-break-inside:avoid;\">\n"\
"<h2>{}</h2>\n"\
"{}\n"\
"{}\n".format(article.title, article_body, cite_block)
content += article_block
content += "</body></html>"
return content
def build_all(path_prefix, lexicon_name):
"""
Builds all browsable articles and pages in the Lexicon.
"""
lex_path = os.path.join(path_prefix, lexicon_name)
# Load the Lexicon's peripherals
config = utils.load_config(lexicon_name)
page_skeleton = utils.load_resource("page-skeleton.html")
page = LexiconPage(skeleton=page_skeleton)
page.add_kwargs(
lexicon=config["LEXICON_TITLE"],
logo=config["LOGO_FILENAME"],
prompt=config["PROMPT"],
sort=config["DEFAULT_SORT"])
# Parse the written articles
articles = LexiconArticle.parse_from_directory(os.path.join(lex_path, "src"))
# Once they've been populated, the articles list has the titles of all articles
# Sort this by turn before title so prev/next links run in turn order
articles = sorted(
LexiconArticle.populate(articles),
key=lambda a: (a.turn, utils.titlesort(a.title)))
def pathto(*els):
return os.path.join(lex_path, *els)
# Write the redirect page
print("Writing redirect page...")
with open(pathto("index.html"), "w", encoding="utf8") as f:
f.write(utils.load_resource("redirect.html").format(
lexicon=config["LEXICON_TITLE"], sort=config["DEFAULT_SORT"]))
# Write the article pages
print("Deleting old article pages...")
for filename in os.listdir(pathto("article")):
if filename[-5:] == ".html":
os.remove(pathto("article", filename))
print("Writing article pages...")
l = len(articles)
for idx in range(l):
article = articles[idx]
with open(pathto("article", article.title_filesafe + ".html"), "w", encoding="utf-8") as f:
contentblock = article.build_default_contentblock()
citeblock = article.build_default_citeblock(
None if idx == 0 else articles[idx - 1],
None if idx == l-1 else articles[idx + 1])
article_html = page.format(
title = article.title,
content = contentblock + citeblock)
f.write(article_html)
print(" Wrote " + article.title)
# Write default pages
print("Writing default pages...")
with open(pathto("contents", "index.html"), "w", encoding="utf-8") as f:
f.write(build_contents_page(page, articles, config["INDEX_LIST"]))
print(" Wrote Contents")
with open(pathto("rules", "index.html"), "w", encoding="utf-8") as f:
f.write(build_rules_page(page))
print(" Wrote Rules")
with open(pathto("formatting", "index.html"), "w", encoding="utf-8") as f:
f.write(build_formatting_page(page))
print(" Wrote Formatting")
with open(pathto("session", "index.html"), "w", encoding="utf-8") as f:
f.write(build_session_page(page, config["SESSION_PAGE"]))
print(" Wrote Session")
with open(pathto("statistics", "index.html"), "w", encoding="utf-8") as f:
f.write(build_statistics_page(page, articles))
print(" Wrote Statistics")
# Write auxiliary pages
if "PRINTABLE_FILE" in config and config["PRINTABLE_FILE"]:
with open(pathto(config["PRINTABLE_FILE"]), "w", encoding="utf-8") as f:
f.write(build_compiled_page(articles, config))
print(" Wrote compiled page to " + config["PRINTABLE_FILE"])
# Check that authors aren't citing themselves
print("Running citation checks...")
article_by_title = {article.title : article for article in articles}
for article in articles:
for _, tup in article.citations.items():
cited = article_by_title[tup[1]]
if article.player == cited.player:
print(" {2}: {0} cites {1}".format(article.title, cited.title, cited.player))
print()


@@ -1,29 +0,0 @@
<script type="text/javascript">
contentsToggle = function() {
    var b = document.getElementById("toggle-button");
    var i = document.getElementById("index-order");
    var t = document.getElementById("turn-order");
    if (t.style.display == "none") {
        i.style.display = "none";
        t.style.display = "block";
        b.innerText = "Switch to index order";
    } else {
        i.style.display = "block";
        t.style.display = "none";
        b.innerText = "Switch to turn order";
    }
}
window.onload = function() {
    if (location.search.search("byturn") > 0) {
        document.getElementById("turn-order").style.display = "block";
        document.getElementById("toggle-button").innerText = "Switch to index order";
    }
    if (location.search.search("byindex") > 0) {
        document.getElementById("index-order").style.display = "block";
        document.getElementById("toggle-button").innerText = "Switch to turn order";
    }
}
</script>
<button id="toggle-button" onClick="javascript:contentsToggle()">Switch to turn order</button>


@@ -1,61 +0,0 @@
# LEXIPYTHON CONFIG FILE
#
# This file defines the configuration values for an instance of Lexipython.
# Configuration values are written as:
>>>CONFIG_NAME>>>
value
<<<CONFIG_NAME<<<
# The above defines a config value named CONFIG_NAME with a value of "value".
# The title of the Lexicon game, displayed at the top of each entry.
>>>LEXICON_TITLE>>>
Lexicon Title
<<<LEXICON_TITLE<<<
# The sidebar image. Constrained to 140px wide.
>>>LOGO_FILENAME>>>
logo.png
<<<LOGO_FILENAME<<<
# The prompt for the Lexicon. Will be read as HTML and inserted into the
# header directly.
>>>PROMPT>>>
<i>Prompt goes here</i>
<<<PROMPT<<<
# Session page content. Will be read as HTML and inserted into the body of
# the session page directly.
>>>SESSION_PAGE>>>
<p>Put session information here, like the index grouping and turn count, where to send completed entries, index assignments, turn schedule, and so on.</p>
<<<SESSION_PAGE<<<
# Index headers. An index is a string of characters; an entry falls under
# that index if its title begins with one of those characters. Indices are
# listed in the order written, and entries are put into the first index they
# match. Leftover entries are listed under "&c." at the end.
>>>INDEX_LIST>>>
ABC
DEF
GHI
JKL
MNO
PQRS
TUV
WXYZ
<<<INDEX_LIST<<<
# The default sorting to use on the contents page.
# Allowed values are "?byturn" and "?byindex".
>>>DEFAULT_SORT>>>
?byturn
<<<DEFAULT_SORT<<<
# Graphviz file name. If present, the graph of page citations will be written
# in the dot file format.
>>>GRAPHVIZ_FILE>>>
<<<GRAPHVIZ_FILE<<<
# Print version file name. If present, the lexicon will be compiled and written
# into a single print-ready HTML file.
>>>PRINTABLE_FILE>>>
<<<PRINTABLE_FILE<<<
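The first-match index rule described in the INDEX_LIST comments above can be sketched as follows. This is a minimal illustration, not Lexipython's actual implementation; `assign_index` is a hypothetical helper:

```python
def assign_index(title, index_list):
    """Return the first index whose character set contains the title's
    first letter, or "&c." if no index matches."""
    first = title.strip()[:1].upper()
    for index in index_list:
        if first in index:
            return index
    return "&c."

index_list = ["ABC", "DEF", "GHI", "JKL", "MNO", "PQRS", "TUV", "WXYZ"]
assign_index("Dragon", index_list)    # "DEF"
assign_index("101 Uses", index_list)  # "&c." (no index starts with a digit)
```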


@@ -1,35 +0,0 @@
body { background-color: #eeeeee; line-height: 1.4; font-size: 16px; }
div#wrapper { max-width: 800px; position: absolute; left: 0; right: 0;
margin: 0 auto; }
div#header { padding: 5px; margin: 5px; background-color: #ffffff;
box-shadow: 2px 2px 10px #888888; border-radius: 5px; }
div#header p, div#header h2 { margin: 5px; }
div#sidebar { width: 200px; float:left; margin:5px; padding: 8px;
text-align: center; background-color: #ffffff;
box-shadow: 2px 2px 10px #888888; border-radius: 5px; }
img#logo { max-width: 200px; }
table { table-layout: fixed; width: 100%; }
div#sidebar table { border-collapse: collapse; }
div.citeblock table td:first-child + td a { justify-content: flex-end; }
table a { display: flex; padding: 3px; background-color: #dddddd;
border-radius: 5px; text-decoration: none; }
div#sidebar table a { justify-content: center; }
table a:hover { background-color: #cccccc; }
div#sidebar table td { padding: 0px; margin: 3px 0;
border-bottom: 8px solid transparent; }
div#content { position: absolute; right: 0px; left: 226px; max-width: 564px;
margin: 5px; }
div.contentblock { background-color: #ffffff; box-shadow: 2px 2px 10px #888888;
margin-bottom: 5px; padding: 10px; width: auto; border-radius: 5px; }
a.phantom { color: #cc2200; }
div.citeblock a.phantom { font-style: italic; }
span.signature { text-align: right; }
@media only screen and (max-width: 816px) {
    div#wrapper { padding: 5px; }
    div#header { max-width: 554px; margin: 0 auto; }
    div#sidebar { max-width: 548px; width: inherit; float: inherit;
        margin: 5px auto; }
    div#content { max-width: 564px; position: static; right: inherit;
        margin: 5px auto; }
    img#logo { max-width: inherit; width: 100%; }
}


@@ -1,75 +0,0 @@
import os
import re
from urllib import parse

# Short utility functions for handling titles

def titlecase(s):
    """
    Capitalizes the first letter of a title.
    """
    s = s.strip()
    return s[:1].capitalize() + s[1:]

def titleescape(s):
    """
    Makes an article title filename-safe.
    """
    s = s.strip()
    s = re.sub(r"\s+", '_', s)  # Replace whitespace with _
    s = parse.quote(s)          # Encode all other characters
    s = re.sub(r"%", "", s)     # Strip encoding %s
    s = s[:64]                  # Limit to 64 characters
    return s

def titlesort(s):
    """
    Reduces titles down for sorting.
    """
    s = s.lower()
    if s.startswith("the "): return s[4:]
    if s.startswith("an "): return s[3:]
    if s.startswith("a "): return s[2:]
    return s
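The effect of `titlesort` is to ignore a leading article when ordering entries. A quick check (the function is restated here verbatim so the example is self-contained):

```python
def titlesort(s):
    """Reduce a title for sorting by dropping a leading article."""
    s = s.lower()
    if s.startswith("the "): return s[4:]
    if s.startswith("an "): return s[3:]
    if s.startswith("a "): return s[2:]
    return s

titles = ["The Zoo", "An Apple", "Beacon"]
print(sorted(titles, key=titlesort))  # ['An Apple', 'Beacon', 'The Zoo']
```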
# Load functions

def load_resource(filename, cache={}):
    """Loads files from the resources directory with caching."""
    if filename not in cache:
        with open(os.path.join("src", "resources", filename), "r", encoding="utf-8") as f:
            cache[filename] = f.read()
    return cache[filename]
def load_config(name):
    """
    Loads values from a Lexicon's config file.
    """
    config = {}
    with open(os.path.join("lexicon", name, "lexicon.cfg"), "r", encoding="utf-8") as f:
        line = f.readline()
        while line:
            # Skim lines until a value definition begins
            conf_match = re.match(r">>>([^>]+)>>>\s+", line)
            if not conf_match:
                line = f.readline()
                continue
            # Accumulate the conf value until the value ends
            conf = conf_match.group(1)
            conf_value = ""
            line = f.readline()
            conf_match = re.match(r"<<<{0}<<<\s+".format(conf), line)
            while line and not conf_match:
                conf_value += line
                line = f.readline()
                conf_match = re.match(r"<<<{0}<<<\s+".format(conf), line)
            if not line:
                # TODO Not this
                raise SystemExit("Reached EOF while reading config value {}".format(conf))
            config[conf] = conf_value.strip()
    # Check that all necessary values were configured
    for config_value in ["LEXICON_TITLE", "PROMPT", "SESSION_PAGE", "INDEX_LIST"]:
        if config_value not in config:
            # TODO Not this either
            raise SystemExit("Error: {} not set in lexicon.cfg".format(config_value))
    return config
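The `>>>NAME>>>`/`<<<NAME<<<` framing that `load_config` scans for can be exercised on an in-memory string. This is a minimal sketch using the same regexes; `parse_config` is a hypothetical stand-in that skips the EOF and required-key checks, while the real function reads `lexicon/<name>/lexicon.cfg`:

```python
import io
import re

def parse_config(stream):
    """Parse >>>NAME>>> ... <<<NAME<<< blocks from a file-like object."""
    config = {}
    line = stream.readline()
    while line:
        # Skim lines until a value definition begins
        match = re.match(r">>>([^>]+)>>>\s+", line)
        if not match:
            line = stream.readline()
            continue
        # Accumulate lines until the matching close marker
        name, value = match.group(1), ""
        line = stream.readline()
        while line and not re.match(r"<<<{0}<<<\s+".format(name), line):
            value += line
            line = stream.readline()
        config[name] = value.strip()
        line = stream.readline()
    return config

sample = ">>>LEXICON_TITLE>>>\nMy Lexicon\n<<<LEXICON_TITLE<<<\n"
parse_config(io.StringIO(sample))  # {'LEXICON_TITLE': 'My Lexicon'}
```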