Compare commits
8 Commits
syndicatio ... master

Commits (SHA1):
575bf336f9
bb0d20bd09
7fba705bc6
2d302bfec9
f23a166c3d
3607295b9e
af589847e7
332863fb30

README.md (41 lines changed)
@@ -5,13 +5,18 @@ Simple and stylish text-to-html microblog generator.

## Requirements

python3 dateutil toml make curl pycurl urllib

The following python modules are used within the repository.

* `dateutil`, `toml` are Python modules.
* `make` (optional), method for invoking the script.
* `curl`, `pycurl` and `urllib` (optional), for uploading multiple files to neocities (`neouploader.py`).

toml tomlkit python_dateutil pycurl

### Usage

* `tomlkit` (optional), for maintaining the configuration file between updates (`check-settings.py`).

Some GNU core utilities are expected to be present but can be substituted for other means.

* `make` (optional), to invoke the script using Makefiles
* `date` (optional), to generate timestamps when writing posts

## Usage

The following generates a sample page `result.html`.

@@ -24,16 +29,12 @@ Using `make` is optional; it does the following within a new directory:

cp example/timeline.css ./timeline.css
cp example/default.tpl ./template.tpl
cp example/demo.txt ./content.txt
python microblog.py ./template.tpl ./content.txt > result.html
python src/microblog.py ./template.tpl ./content.txt > result.html

This script generates a text file after it runs.

* `updatedfiles.txt`, a list of files updated by the script for use in automated uploads.

## Configuration

Settings are read from `settings.toml`. See `example/settings.toml`.

### Writing Content

See `example/demo.txt`.

@@ -56,6 +57,26 @@ The content file is a plain text file of posts. Each post has two types of infor

* the two last lines of the file must be empty
* html can be placed in the message for embedded videos and rich text
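The content-file rules above can be sketched as a small state machine. This is a hypothetical, condensed reading of the format (a leading blank line, then alternating timestamp and message lines, with a blank line closing each post), not the script's actual parser:

```python
# Minimal sketch (assumption: condensed from the content-file rules above) of
# how a posts file is read: skip the leading blank line, then alternate a
# timestamp line with message lines; a blank line ends each post.
def parse_posts(text):
    posts, message, timestamp, state = [], [], "", -1
    for line in text.splitlines(keepends=True):
        if state == -1:        # skip the leading blank line
            state = 0
            continue
        elif state == 0:       # timestamp is next
            timestamp = line.strip()
            state = 1
        elif state == 1:       # message lines until a blank line
            if len(line) > 1:
                message.append(line.rstrip("\n"))
            else:
                posts.append((timestamp, message))
                message, state = [], 0
    return posts

demo = "\n2023-01-02\nhello world\n\n2023-01-03\nsecond post\n\n"
posts = parse_posts(demo)
```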
## Configuration

Settings are read from `settings.toml`. See `example/settings.toml`.

Configuration options as understood by the script are tentative and may change in the future.

### A key may be missing from your settings file (KeyError)

>I'm getting KeyError when I run the program

>This script is throwing KeyError after I ran git pull

In most cases, this means I added new configuration options. You can resolve this error by adding missing keys from `example/settings.toml` to `settings.toml`.

The following command can check for missing keys and update if needed.

python src/check-settings.py

Missing keys, if any, are initialized to default values from `example/settings.toml`.
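The key-filling step the checker performs can be illustrated in a few lines. This is a hypothetical, simplified sketch for a single flat table (`fill_missing` is an illustrative name, not a function in the repository; the real script also walks subtables):

```python
# Hypothetical sketch: copy any key present in the reference settings but
# absent from the user's settings, leaving existing values untouched.
def fill_missing(reference, user):
    updated = dict(user)
    added = []
    for key, value in reference.items():
        if key not in updated:
            updated[key] = value
            added.append(key)
    return updated, added

ref = {"postsperpage": 20, "landing_page": "index.html"}
user = {"postsperpage": 10}
merged, added = fill_missing(ref, user)
```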
## Anything else

This is a script I wrote for personal use. The output can be seen on [https://likho.neocities.org/microblog/index.html](https://likho.neocities.org/microblog/index.html). I figure someone else may want to use it for their own personal websites, so it is published.
@@ -1,22 +1,27 @@
all: template.tpl content.txt timeline.css
	python microblog.py ./template.tpl ./content.txt > result.html
all: demo tpl css settings
	python src/microblog.py ./template.tpl ./content.txt > result.html

# for people who don't want to read the README
# and want to hit `make` to see how things work.
template.tpl:
check:
	python src/check-settings.py

# first time run only
tpl:
	cp ./example/default.tpl ./template.tpl

timeline.css:
css:
	cp ./example/timeline.css ./timeline.css

content.txt:
demo:
	cp ./example/demo.txt ./content.txt

settings:
	cp ./example/settings.toml ./settings.toml

.PHONY: clean
clean:
	rm ./pages/*.html
	rm ./tags/*/*.html
	rm lastfullpage.txt
	rmdir ./pages ./tags/* ./tags
	rm ./webring/*.html
	rmdir ./pages ./tags/* ./tags ./webring
@ -1,42 +1,44 @@
|
||||
<!DOCTYPE html>
|
||||
<html>
|
||||
<html lang="en">
|
||||
<head>
|
||||
<meta charset="UTF-8">
|
||||
<meta content="initial-scale=1.0">
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1.0">
|
||||
<title>Microblog</title>
|
||||
<link href="./style.css" rel="stylesheet" type="text/css" media="all">
|
||||
|
||||
<!-- <link href="./style.css" rel="stylesheet" type="text/css" media="all"> -->
|
||||
<link href="./timeline.css" rel="stylesheet" type="text/css" media="all">
|
||||
</head>
|
||||
<body>
|
||||
<div class="content">
|
||||
|
||||
<header>
|
||||
<h1>A Microblog in Plain HTML</h1>
|
||||
</header>
|
||||
|
||||
<h1>A Microblog in Plain HTML</h1>
|
||||
<aside class="column profile">
|
||||
<figure>
|
||||
<img src="images/avatar.jpg" alt="(Avatar)" class="avatar">
|
||||
<span>Your Name Here</span>
|
||||
</figure>
|
||||
<p>
|
||||
<a href="mailto:user@host.tld">user@host.tld</a>
|
||||
</p>
|
||||
<h2>About Me</h2>
|
||||
<p>Your self-description here.</p>
|
||||
<p>{postcount} total posts</p>
|
||||
<h3>Tags</h3>
|
||||
<nav>{tags}</nav>
|
||||
<h3>Pages</h3>
|
||||
<nav>{pages}</nav>
|
||||
</aside>
|
||||
|
||||
<div class = "row"> <div class = "column">
|
||||
<div class="profile">
|
||||
<img src="./images/avatar.jpg" alt="Avatar" class="avatar">
|
||||
<span class="handle">Your Name Here</span>
|
||||
<p><span class="email"><a href="mailto:user@host.tld">user@host.tld</a></span></p>
|
||||
<div class="bio">Description
|
||||
<h4>{postcount} total posts</h4>
|
||||
<h3>Tags</h3>
|
||||
<p>{tags}</p>
|
||||
<h3>Pages</h3>
|
||||
<p>{pages}</p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class = "timeline">
|
||||
<main class="timeline">
|
||||
{timeline}
|
||||
</div>
|
||||
</div>
|
||||
</main>
|
||||
|
||||
<center>
|
||||
<a href="https://notabug.org/likho/microblog.py">microblog.py</a>
|
||||
</center>
|
||||
<footer>
|
||||
<a href="https://notabug.org/likho/microblog.py">microblog.py</a>
|
||||
</footer>
|
||||
|
||||
</div>
|
||||
</body>
|
||||
</html>
|
||||
|
||||
|
@ -1,26 +1,39 @@
|
||||
latestpage="result.html"
|
||||
# latestpage="result.html"
|
||||
latestpages=["meta.json", "result.html"]
|
||||
|
||||
[page]
|
||||
postsperpage = 20
|
||||
relative_css=["./style.css", "./timeline.css"]
|
||||
# this would be "latest.html" in earlier versions i.e
|
||||
# user.domain.tld/microblog/tags/tagname/latest.html
|
||||
# naming it as index enables paths like so
|
||||
# user.domain.tld/microblog/tags/tagname
|
||||
landing_page="index.html"
|
||||
|
||||
[post]
|
||||
accepted_images= ["jpg", "JPG", "png", "PNG"]
|
||||
# true = add <p></p> tags to each line.
|
||||
tag_paragraphs=true
|
||||
# adds <br> or user defined string between each line
|
||||
# line_separator="<br>"
|
||||
# apply <p> tags even if a line contains the following
|
||||
inline_tags = ["i", "em", "b", "strong","u", "s", "a", "span"]
|
||||
date_format="%d %B %Y"
|
||||
format="""
|
||||
<div class="postcell" id="{__num__}">
|
||||
<div class="timestamp">{__timestamp__}
|
||||
<article id="{__num__}">
|
||||
<h4>
|
||||
<time>{__timestamp__}</time>
|
||||
<a href=#{__num__}>(#{__num__})</a>
|
||||
</div>
|
||||
<div class="message">{__msg__}</div>
|
||||
</h4>
|
||||
{__msg__}
|
||||
{__btn__}
|
||||
</div>
|
||||
</article>
|
||||
"""
|
||||
|
||||
[post.buttons]
|
||||
format="""
|
||||
<a class="buttons" href="{__url__}">{__label__}</a>
|
||||
"""
|
||||
|
||||
[post.buttons.links]
|
||||
reply = "mailto:user@host.tld"
|
||||
test = "https://toml.io/en/v1.0.0#array-of-tables"
|
||||
interact = "https://yoursite.tld/cgi?postid="
|
||||
@@ -28,3 +41,38 @@ interact = "https://yoursite.tld/cgi?postid="

[post.gallery]
path_to_thumb="./thumbs"
path_to_fullsize="./images"

[webring]
enabled=false
file_output="meta.json"

[webring.profile]
username="Your name here"
url="https://yourdomain.tld/microblog/"
avatar="https://yourdomain.tld/microblog/images/avatar.jpg"
short-bio= "Your self-description. Anything longer than 150 characters is truncated."

[webring.following]
list= ["https://likho.neocities.org/microblog/meta.json"]
date_format = "%Y %b %d"
format="""
<article>
<figure>
<img src="{__avatar__}" alt="Avatar" class="avatar">
<figcaption>
<ul>
<li><a href="{__url__}" title="microblog of {__handle__}">{__handle__}</a></li>
<li><time>Last Update: {__lastupdated__}</time></li>
<li>Posts: {__post_count__}</li>
</ul>
</figcaption>
</figure>
<p class="short-bio">{__shortbio__}</p>
</article>
"""

# internally link avatars - avoids hotlinks
[webring.following.internal-avatars]
enabled=false
path_to_avatars="/microblog/avatars" # link rendered on page
local_path_to_avatars="./avatars" # destination folder on pc
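The `{__...__}` placeholders in the `[post]` format templates above are filled by ordinary Python `str.format` substitution. A minimal sketch, using a shortened template rather than the one in the settings file:

```python
# Minimal sketch of how a post format template is filled (assumption:
# mirrors the keyword-argument substitution used by the generator).
fmt = '<article id="{__num__}"><time>{__timestamp__}</time>{__msg__}{__btn__}</article>'
html = fmt.format(__num__=3, __timestamp__="23 Apr 05",
                  __msg__="<p>hi</p>", __btn__="")
```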
@ -1,60 +1,76 @@
|
||||
|
||||
body {
|
||||
max-width:95%;
|
||||
margin:auto;
|
||||
}
|
||||
|
||||
@media only screen and (min-width: 768px) {
|
||||
.column {
|
||||
float: left;
|
||||
width: 32%;
|
||||
width: 30%;
|
||||
}
|
||||
.timeline {
|
||||
float: right;
|
||||
width: 67%;
|
||||
}
|
||||
}
|
||||
.postcell {
|
||||
|
||||
/* POSTING */
|
||||
|
||||
/* .postcell */
|
||||
.timeline article {
|
||||
border: 1px solid red;
|
||||
text-align: left;
|
||||
margin: 0.25em 0
|
||||
}
|
||||
.message {
|
||||
.timeline article h4 {
|
||||
text-align: right;
|
||||
margin: 0.5em
|
||||
}
|
||||
.timeline article h4 ~ * {
|
||||
margin: 1em 1em 1em 3em;
|
||||
white-space: pre-wrap;
|
||||
word-wrap: break-word;
|
||||
}
|
||||
.buttons {
|
||||
margin-left: 1em;
|
||||
margin-bottom:0.5em;
|
||||
}
|
||||
.timestamp {
|
||||
text-align: right;
|
||||
margin: 0.5em
|
||||
}
|
||||
.hashtag {
|
||||
color: green;
|
||||
font-weight: bold;
|
||||
}
|
||||
.profile {
|
||||
vertical-align: middle;
|
||||
padding-left: 10px;
|
||||
border:1px solid blue;
|
||||
|
||||
/* PROFILE */
|
||||
.column figure {
|
||||
margin-left: 3%;
|
||||
}
|
||||
.avatar {
|
||||
vertical-align: middle;
|
||||
width: 50px;
|
||||
height: 50px;
|
||||
}
|
||||
.handle{
|
||||
.column {
|
||||
border:1px solid blue;
|
||||
padding-left: 10px;
|
||||
padding:1%;
|
||||
}
|
||||
.profile .handle{
|
||||
font-size: 1.1em;
|
||||
font-weight: bold;
|
||||
}
|
||||
.email{
|
||||
text-align:left;
|
||||
.profile .email{
|
||||
font-size: 0.8em;
|
||||
text-align:left;
|
||||
text-decoration:none;
|
||||
}
|
||||
.bio {
|
||||
vertical-align: middle;
|
||||
.profile .bio {
|
||||
font-size: 0.9em;
|
||||
vertical-align: middle;
|
||||
margin: 1em
|
||||
}
|
||||
|
||||
/* IMAGES */
|
||||
|
||||
.gallery {
|
||||
margin:auto;
|
||||
display: flex;
|
||||
@@ -73,9 +89,41 @@
border: 1px solid #777;
filter: invert(100%);
}
/* Clear floats after the columns */
.row:after {
content: "";
display: table;
clear: both;

/* WEBRING */

.timeline article figure img {
margin-left:3%;
margin-top:2%;
height: 4em;
width:auto;
vertical-align:top;
}
.timeline article figure {
display:flex;
margin-left:0;
}
.timeline article figcaption {
margin-left: 3%;
display: inline-block;
font-size: 0.85em;
}
.timeline article figcaption ul {
list-style-type:none;
padding-left:0;
}
.timeline article figcaption p {
margin-top:0;
margin-bottom:0;
}
.timeline article .short-bio{
padding-left: 3%;
padding-right: 2%;
font-style: italic;
word-wrap: break-word;
}

footer {
text-align:center;
}
microblog.py (395 lines deleted)
@@ -1,395 +0,0 @@
import sys, os, traceback
import dateutil.parser

# returns html-formatted string
def make_buttons(btn_dict, msg_id):
    buttons = "<div class=\"buttons\">"
    fmt = "<a href=\"%s\">[%s]</a>"
    for key in btn_dict:
        url = btn_dict[key]
        if url[-1] == '=':
            # then interpret it as a query string
            url += str(msg_id)
        buttons += fmt % (url,key)
    buttons += "</div>"
    return buttons

# apply div classes for use with .css
def make_post(num, timestamp, conf, msg):
    fmt = conf["format"]
    if "buttons" in conf:
        b = make_buttons(conf["buttons"], num)
    else:
        b = ""
    return fmt.format(
        __timestamp__=timestamp, __num__=num, __msg__=msg, __btn__=b)

def make_gallery(indices, w, conf=None):
    tag = []
    if indices == []:
        return tag
    template = '''
    <div class=\"panel\">
    <a href=\"%s\"><img src=\"%s\" class=\"embed\"></a>
    </div>
    '''
    tag.append("<div class=\"gallery\">")
    for index in reversed(indices):
        image = w.pop(index)
        is_path = image[0] == '.' or image[0] == '/'
        if conf and not is_path:
            thumb = "%s/%s" % (conf["path_to_thumb"], image)
            full = "%s/%s" % (conf["path_to_fullsize"], image)
            tag.append(template % (full,thumb))
            continue
        elif not conf and not is_path:
            msg = ("Warning: no path defined for image %s!" % image)
            print(msg,file=sys.stderr)
        else:
            pass
        tag.append(template % (image, image))
    tag.append("</div>")
    return tag

def markup(message, config):
    def is_image(s, image_formats):
        l = s.rsplit('.', maxsplit=1)
        if len(l) < 2:
            return False
        # Python 3.10.5
        # example result that had to be filtered:
        # string: started.
        # result: ['started', '']
        if l[1] == str(''):
            return False
        #print(s, l, file=sys.stderr)
        if l[1] in image_formats:
            return True
        return False

    result = 0
    tagged = ""
    # support multiple images (gallery style)
    tags = [] # list of strings
    output = []
    gallery = []
    ptags = config["tag_paragraphs"]
    sep = ""
    if "line_separator" in config:
        sep = config["line_separator"]
    for line in message:
        images = [] # list of integers
        words = line.split()
        for i in range(len(words)):
            word = words[i]
            # don't help people click http
            if word.find("src=") == 0 or word.find("href=") == 0:
                continue
            elif word.find("https://") != -1:
                w = escape(word)
                new_word = ("<a href=\"%s\">%s</a>") % (w, w)
                words[i] = new_word
            elif word.find("#") != -1 and len(word) > 1:
                # split by unicode blank character if present
                # allows tagging such as #fanfic|tion
                w = word.split(chr(8206))
                # w[0] is the portion closest to the #
                tags.append(w[0])
                new_word = "<span class=\"hashtag\">%s</span>" % (w[0])
                if len(w) > 1:
                    new_word += w[1]
                words[i] = new_word
            elif is_image(word, config["accepted_images"]):
                images.append(i)
        if len(images) > 0:
            # function invokes pop() which modifies list 'words'
            gc = config["gallery"] if "gallery" in config else None
            gallery = make_gallery(images, words, gc)
        if ptags and len(words) > 0:
            words.insert(0,"<p>")
            words.append("</p>")
        output.append(" ".join(words))
        # avoid paragraph with an image gallery
        if len(gallery) > 0:
            output.append("".join(gallery))
            gallery = []
    return sep.join(output), tags

# apply basic HTML formatting - only div class here is gallery
from html import escape
class Post:
    def __init__(self, ts, msg):
        self.timestamp = ts.strip() # string
        self.message = msg # list

    # format used for sorting
    def get_epoch_time(self):
        t = dateutil.parser.parse(self.timestamp)
        return int(t.timestamp())

    # format used for display
    def get_short_time(self):
        t = dateutil.parser.parse(self.timestamp)
        return t.strftime("%y %b %d")

def parse_txt(filename):
    content = []
    with open(filename, 'r') as f:
        content = f.readlines()
    posts = [] # list of posts - same order as file
    message = [] # list of lines
    # {-1 = init; 0 = timestamp is next, 1 = message is next}
    state = -1
    timestamp = ""
    for line in content:
        if state == -1:
            state = 0
            continue
        elif state == 0:
            timestamp = line
            state = 1
        elif state == 1:
            if len(line) > 1:
                message.append(line)
            else:
                p = Post(timestamp, message)
                posts.append(p)
                # reset
                message = []
                state = 0
    return posts

def get_posts(filename, config):
    posts = parse_txt(filename)
    taginfos = []
    tagcloud = dict() # (tag, count)
    tagged = dict() # (tag, index of message)
    total = len(posts)
    count = total
    index = count # - 1
    timeline = []
    btns = None
    for post in posts:
        markedup, tags = markup(post.message, config)
        count -= 1
        index -= 1
        timeline.append(
            make_post(count, post.get_short_time(), config, markedup)
        )
        for tag in tags:
            if tagcloud.get(tag) == None:
                tagcloud[tag] = 0
            tagcloud[tag] += 1
            if tagged.get(tag) == None:
                tagged[tag] = []
            tagged[tag].append(index)
    return timeline, tagcloud, tagged

def make_tagcloud(d, rell):
    sorted_d = {k: v for k,
                v in sorted(d.items(),
                key=lambda item: -item[1])}
    output = []
    fmt = "<span class=\"hashtag\"><a href=\"%s\">%s(%i)</a></span>"
    #fmt = "<span class=\"hashtag\">%s(%i)</span>"
    for key in d.keys():
        link = rell % key[1:]
        output.append(fmt % (link, key, d[key]))
    return output

class Paginator:
    def __init__(self, post_count, ppp, loc=None):
        if post_count <= 0:
            raise Exception
        if not loc:
            loc = "pages"
        if loc and not os.path.exists(loc):
            os.mkdir(loc)
        self.TOTAL_POSTS = post_count
        self.PPP = ppp
        self.TOTAL_PAGES = int(post_count/self.PPP)
        self.SUBDIR = loc
        self.FILENAME = "%i.html"
        self.written = []

    def toc(self, current_page=None, path=None): #style 1
        if self.TOTAL_PAGES < 1:
            return "[no pages]"
        if path == None:
            path = self.SUBDIR
        # For page 'n' do not create an anchor tag
        fmt = "<a href=\"%s\">[%i]</a>" #(filename, page number)
        anchors = []
        for i in reversed(range(self.TOTAL_PAGES)):
            if i != current_page:
                x = path + "/" + (self.FILENAME % i)
                anchors.append(fmt % (x, i))
            else:
                anchors.append("<b>[%i]</b>" % i)
        return "\n".join(anchors)

    # makes one page
    def singlepage(self, template, tagcloud, timeline_, i=None, p=None):
        tc = "\n".join(tagcloud)
        tl = "\n\n".join(timeline_)
        toc = self.toc(i, p)
        return template.format(
            postcount=self.TOTAL_POSTS, tags=tc, pages=toc, timeline=tl
        )

    def paginate(self, template, tagcloud, timeline, is_tagline=False):
        outfile = "%s/%s" % (self.SUBDIR, self.FILENAME)
        timeline.reverse() # reorder from oldest to newest
        start = 0
        for i in range(start, self.TOTAL_PAGES):
            fn = outfile % i
            with open(fn, 'w') as f:
                self.written.append(fn)
                prev = self.PPP * i
                curr = self.PPP * (i+1)
                sliced = timeline[prev:curr]
                sliced.reverse()
                f.write(self.singlepage(template, tagcloud, sliced, i, "."))
        return

import argparse
if __name__ == "__main__":
    def sort(filename):
        def export(new_content, new_filename):
            with open(new_filename, 'w') as f:
                print(file=f)
                for post in new_content:
                    print(post.timestamp, file=f)
                    print("".join(post.message), file=f)
            return
        posts = parse_txt(filename)
        posts.sort(key=lambda e: e.get_epoch_time())
        outfile = ("%s.sorted" % filename)
        print("Sorted text written to ", outfile)
        export(reversed(posts), outfile)

    def get_args():
        p = argparse.ArgumentParser()
        p.add_argument("template", help="an html template file")
        p.add_argument("content", help="text file for microblog content")
        p.add_argument("--sort", \
            help="sorts content from oldest to newest"
            " (this is a separate operation from page generation)", \
            action="store_true")
        args = p.parse_args()
        if args.sort:
            sort(args.content)
            exit()
        return args.template, args.content

    # assume relative path
    def demote_css(template, css_list, level=1):
        prepend = ""
        if level == 1:
            prepend = '.'
        else:
            for i in range(level):
                prepend = ("../%s" % prepend)
        tpl = template
        for css in css_list:
            tpl = tpl.replace(css, ("%s%s" % (prepend, css) ))
        return tpl

    # needs review / clean-up
    # ideally relate 'lvl' with sub dir instead of hardcoding
    def writepage(template, timeline, tagcloud, config, subdir = None):
        html = ""
        with open(template,'r') as f:
            html = f.read()
        try:
            count = len(timeline)
            p = config["postsperpage"]
            pagectrl = Paginator(count, p, subdir)
        except ZeroDivisionError as e:
            print("error: ",e, ". check 'postsperpage' in config", file=sys.stderr)
            exit()
        except Exception as e:
            print("error: ",e, ("(number of posts = %i)" % count), file=sys.stderr)
            exit()
        latest = timeline if count <= pagectrl.PPP else timeline[:pagectrl.PPP]
        if subdir == None: # if top level page
            lvl = 1
            tcloud = make_tagcloud(tagcloud, "./tags/%s/latest.html")
            print(pagectrl.singlepage(html, tcloud, latest))
            tcloud = make_tagcloud(tagcloud, "../tags/%s/latest.html")
            pagectrl.paginate(
                demote_css(html, config["relative_css"], lvl),
                tcloud, timeline
            )
        else: # if timelines per tag
            is_tagline = True
            lvl = 2
            newhtml = demote_css(html, config["relative_css"], lvl)
            tcloud = make_tagcloud(tagcloud, "../%s/latest.html")
            fn = "%s/latest.html" % subdir
            with open(fn, 'w') as f:
                pagectrl.written.append(fn)
                f.write(
                    pagectrl.singlepage(newhtml, tcloud, latest, p=".")
                )
            pagectrl.paginate(newhtml, tcloud, timeline, is_tagline)
        return pagectrl.written

    import toml
    def load_settings():
        s = dict()
        filename = "settings.toml"
        if os.path.exists(filename):
            with open(filename, 'r') as f:
                s = toml.loads(f.read())
        else:
            s = None
        return s

    def main():
        tpl, content = get_args()
        cfg = load_settings()
        if cfg == None:
            print("exit: no settings.toml found.", file=sys.stderr)
            return
        if "post" not in cfg:
            print("exit: table 'post' absent in settings.toml", file=sys.stderr)
            return
        if "page" not in cfg:
            print("exit: table 'page' absent in settings.toml", file=sys.stderr)
            return
        tl, tc, tg = get_posts(content, cfg["post"])
        if tl == []:
            return
        # main timeline
        updated = []
        updated += writepage(tpl, tl, tc, cfg["page"])
        # timeline per tag
        if tc != dict() and tg != dict():
            if not os.path.exists("tags"):
                os.mkdir("tags")
            for key in tg.keys():
                tagline = []
                for index in tg[key]:
                    tagline.append(tl[index])
                # [1:] means to omit hashtag from dir name
                updated += writepage(
                    tpl, tagline, tc, cfg["page"], \
                    subdir="tags/%s" % key[1:] \
                )
        with open("updatedfiles.txt", 'w') as f:
            for filename in updated:
                print(filename, file=f) # sys.stderr)
            if "latestpage" in cfg:
                print(cfg["latestpage"], file=f)

    try:
        main()
    except KeyError as e:
        traceback.print_exc()
        print("\n\tA key may be missing from your settings file.", file=sys.stderr)
    except dateutil.parser._parser.ParserError as e:
        traceback.print_exc()
        print("\n\tFailed to interpret a date from string..",
            "\n\tYour file of posts may be malformed.",
            "\n\tCheck if your file starts with a line break.", file=sys.stderr)
requirements.txt (new file, 13 lines)
@@ -0,0 +1,13 @@
pycurl
# ==7.45.3
# pycurl==7.45.2

python_dateutil
# ==2.9.0.post0
# python_dateutil==2.8.2

toml
# ==0.10.2

tomlkit
# ==0.12.5
src/check-settings.py (new file, 136 lines)
@@ -0,0 +1,136 @@
import os, argparse
from tomlkit import loads
from tomlkit import dump

def nest_dictionary(d, keys, val):
    for key in keys:
        d = d.setdefault(key, val)
    return d

class MicroblogConfig:
    def __init__(self, given_config):
        self.is_outdated = False
        self.updated = given_config

    def compare(self, sref, suser, keylist=[]):
        # subtable of ref, subtable of user
        updated = self.updated
        # navigate to table
        if keylist != []:
            for key in keylist:
                sref = sref[key]
            for key in keylist:
                suser = suser[key]
            for key in keylist:
                updated = updated[key]
        for key in sref:
            if key not in suser:
                self.is_outdated = True
                updated[key] = sref[key]
                print("noticed '", key, "' missing from ", keylist)
        nest_dictionary(self.updated, keylist, updated)
        return

    def check(self, r, u): # (reference, user)
        for key in r:
            if key == "latestpages": continue
            # post and webring have subtables
            # webring.profile
            # webring.following
            # webring.following.internal-avatars
            # post.gallery
            # post.buttons
            try:
                self.compare(r, u, [key])
            except KeyError:
                u[key] = dict()
                print("missing top-level table '", key, '\'')
                self.compare(r, u, [key])
            if key == "webring":
                self.compare(r, u, ["webring", "profile"])
                self.compare(r, u, ["webring", "following"])
                self.compare(r, u, ["webring", "following", "internal-avatars"])
            if key == "post":
                self.compare(r, u, ["post", "gallery"])
                self.compare(r, u, ["post", "buttons"])

def load_files(user_conf_file):
    script_dir = os.path.dirname(
        os.path.abspath(__file__))
    parent_dir = os.path.abspath(
        os.path.join(script_dir, os.pardir))
    target_folder = "example"
    example = os.path.abspath(
        os.path.join(parent_dir, target_folder))
    ref_file = "%s/%s" % (example, "settings.toml")
    if not os.path.exists(ref_file):
        return
    ref_conf = dict()
    with open(ref_file, 'r') as f:
        ref_conf = loads(f.read())
    user_conf = dict()
    with open(user_conf_file, 'r') as f:
        user_conf = loads(f.read())
    return ref_conf, user_conf

def multi_prompt(message):
    try:
        while True:
            user_input = int(input(f"{message}").lower())
            if user_input < 3:
                return user_input
            else:
                return 0
    except KeyboardInterrupt:
        print()
    except ValueError:
        pass
    return 0

def get_args():
    p = argparse.ArgumentParser()
    p.add_argument("--no-prompt", action="store_true", \
        help="does not ask what to do if missing keys are detected")
    p.add_argument("-c", "--check", type=str,\
        help="sets/changes the file to be checked (default: settings.toml)")
    args = p.parse_args()
    if args.no_prompt:
        print("'--no-prompt' set")
    if args.check:
        print("--check set", args.check)
    else:
        args.check = "settings.toml"
    return args.no_prompt, args.check

def main(is_no_prompt, user_conf_file="settings.toml"):
    print("checking ", user_conf_file)
    reference, user_edited = load_files(user_conf_file)
    mcfg = MicroblogConfig(user_edited)
    mcfg.check(reference, user_edited)
    if mcfg.is_outdated == False:
        print("Your settings file is OK!")
        return
    message = """
    Your settings file is outdated.
    Do you want to...
    \t 1. save new settings to new file
    \t 2. update/overwrite existing settings
    \t *. do nothing
    """
    response = 0 if is_no_prompt else multi_prompt(message)
    out_file = str()
    if response == 0:
        return
    elif response == 1:
        out_file = "new.toml"
    elif response == 2:
        out_file = user_conf_file
    with open(out_file, 'w') as f:
        dump(mcfg.updated, f)
    print("Wrote updated config to ", out_file)

if __name__ == "__main__":
    main(*get_args())
src/microblog.py (new file, 597 lines)
@@ -0,0 +1,597 @@
|
||||
|
||||
import sys, os, traceback
|
||||
import dateutil.parser
|
||||
from time import strftime, localtime
|
||||
|
||||
def make_buttons(btn_conf, msg_id):
|
||||
fmt = btn_conf["format"]
|
||||
buttons = str()
|
||||
for key in btn_conf["links"]:
|
||||
url = btn_conf["links"][key]
|
||||
if url[-1] == '=':
|
||||
url += str(msg_id)
|
||||
buttons += fmt.format(
|
||||
__url__=url, __label__ = key)
|
||||
return buttons
|
||||
|
||||
# apply div classes for use with .css
|
||||
def make_post(num, timestamp, conf, msg):
|
||||
fmt = conf["format"]
|
||||
if "buttons" in conf:
|
||||
b = make_buttons(conf["buttons"], num)
|
||||
else:
|
||||
b = ""
|
||||
return fmt.format(
|
||||
__timestamp__=timestamp, __num__=num, __msg__=msg, __btn__=b)
|
||||
|
||||
def make_gallery(indices, w, conf=None):
|
||||
tag = []
|
||||
if indices == []:
|
||||
return tag
|
||||
template = '''
|
||||
<div class=\"panel\">
|
||||
<a href=\"%s\"><img src=\"%s\" class=\"embed\"></a>
|
||||
</div>
|
||||
'''
|
||||
tag.append("<div class=\"gallery\">")
|
||||
for index in reversed(indices):
|
||||
image = w.pop(index)
|
||||
is_path = image[0] == '.' or image[0] == '/'
|
||||
if conf and not is_path:
|
||||
thumb = "%s/%s" % (conf["path_to_thumb"], image)
|
||||
full = "%s/%s" % (conf["path_to_fullsize"], image)
|
||||
tag.append(template % (full,thumb))
|
||||
continue
|
||||
elif not conf and not is_path:
|
||||
msg = ("Warning: no path defined for image %s!" % image)
|
||||
print(msg,file=sys.stderr)
|
||||
else:
|
||||
pass
|
||||
tag.append(template % (image, image))
|
||||
tag.append("</div>")
|
||||
return tag
|
||||
|
||||
# apply basic HTML formatting - only div class here is gallery
from html.parser import HTMLParser

class My_Html_Parser(HTMLParser):
    def __init__(self, ignore_list):
        super().__init__()
        self.stack = []
        self.completed_by = ""
        # ignore common inline tags
        self.ignore = ignore_list

    def handle_starttag(self, tag, attrs):
        self.stack.append(tag)
        # reset the attribute set by handle_endtag below
        self.completed_by = ""

    def handle_endtag(self, tag):
        # remove an item == tag from the end of the list
        i = len(self.stack) - 1
        last = self.stack[i]
        while i > -1:
            if tag == last:
                self.stack.pop(i)
                break
            i -= 1
            last = self.stack[i]
        if self.stack == [] and tag not in self.ignore:
            self.completed_by = "</%s>" % tag

from html import escape

def markup(message, config):
    def is_image(s, image_formats):
        l = s.rsplit('.', maxsplit=1)
        if len(l) < 2:
            return False
        # Python 3.10.5
        # example result that had to be filtered:
        # string: started.
        # result: ['started', '']
        if l[1] == '':
            return False
        #print(s, l, file=sys.stderr)
        if l[1] in image_formats:
            return True
        return False

    def automarkup(list_of_words):
        images = []
        tags = []
        for i in range(len(list_of_words)):
            word = list_of_words[i]
            # don't help people click http
            if word.find("src=") == 0 or word.find("href=") == 0:
                continue
            elif word.find("https://") != -1:
                w = escape(word)
                new_word = ("<a href=\"%s\">%s</a>") % (w, w)
                list_of_words[i] = new_word
            elif word.find("#") != -1 and len(word) > 1:
                # split by unicode blank character if present
                # allows tagging such as #fanfic|tion
                w = word.split(chr(8206))
                # w[0] is the portion closest to the #
                tags.append(w[0])
                new_word = "<span class=\"hashtag\">%s</span>" % (w[0])
                if len(w) > 1:
                    new_word += w[1]
                list_of_words[i] = new_word
            elif is_image(word, config["accepted_images"]):
                images.append(i)
        return list_of_words, images, tags

    tags = [] # list of strings
    output = []
    gallery = []
    ptags = config["tag_paragraphs"]
    ignore = []
    if "inline_tags" in config:
        ignore = config["inline_tags"]
    parser = My_Html_Parser(ignore)
    sep = ""
    for line in message:
        images = [] # list of integers
        parser.feed(line)
        if parser.stack == [] \
        and (parser.completed_by == "" or parser.completed_by not in line):
            words, images, t = automarkup(line.split())
            tags += t
            if len(images) > 0:
                # function invokes pop() which modifies list 'words'
                gc = config["gallery"] if "gallery" in config else None
                gallery = make_gallery(images, words, gc)
            elif ptags and len(words) > 0:
                words.insert(0, "<p>")
                words.append("</p>")
            output.append(" ".join(words))
        elif "pre" in parser.stack \
        and ("<pre>" not in line \
        and "<code>" not in line and "</code>" not in line):
            output.append(escape(line))
        else: # <pre> is in the parser.stack
            output.append(line.strip())
        # avoid paragraph with an image gallery
        if len(gallery) > 0:
            output.append("".join(gallery))
            gallery = []
    return sep.join(output), tags

class Post:
    def __init__(self, ts, msg):
        self.timestamp = ts.strip() # string
        self.message = msg # list

    # format used for sorting
    def get_epoch_time(self):
        t = dateutil.parser.parse(self.timestamp)
        return int(t.timestamp())

    # format used for display
    def get_short_time(self, form):
        if form == "":
            form = "%y %b %d"
        t = dateutil.parser.parse(self.timestamp)
        return t.strftime(form)

def parse_txt(filename):
    content = []
    with open(filename, 'r') as f:
        content = f.readlines()
    posts = [] # list of posts - same order as file
    message = [] # list of lines
    # states: -1 = init; 0 = timestamp is next; 1 = message is next
    state = -1
    timestamp = ""
    for line in content:
        if state == -1:
            state = 0
            continue
        elif state == 0:
            timestamp = line
            state = 1
        elif state == 1:
            if len(line) > 1:
                message.append(line)
            else:
                p = Post(timestamp, message)
                posts.append(p)
                # reset
                message = []
                state = 0
    return posts

def get_posts(posts, config, newest=None):
    taginfos = []
    tagcloud = dict() # (tag, count)
    tagged = dict() # (tag, index of message)
    total = len(posts)
    count = total
    index = count # - 1
    timeline = []
    df = ""
    subset = []
    if "date_format" in config:
        df = config["date_format"]
    for post in posts:
        markedup, tags = markup(post.message, config)
        count -= 1
        index -= 1
        timeline.append(
            make_post(count, post.get_short_time(df), config, markedup)
        )
        for tag in tags:
            if tagcloud.get(tag) == None:
                tagcloud[tag] = 0
            tagcloud[tag] += 1
            if newest is not None and (total - (1 + count)) < newest:
                subset.append(tag)
            if newest is None \
            or newest is not None and tag in subset:
                if tagged.get(tag) == None:
                    tagged[tag] = []
                tagged[tag].append(index)
    # print(tagged, file=sys.stderr)
    return timeline, tagcloud, tagged

def make_tagcloud(d, rell):
    # order tags by descending count
    sorted_d = {k: v for k, v in
                sorted(d.items(), key=lambda item: -item[1])}
    output = []
    fmt = "<span class=\"hashtag\"><a href=\"%s\">%s(%i)</a></span>"
    #fmt = "<span class=\"hashtag\">%s(%i)</span>"
    for key in sorted_d.keys():
        link = rell % key[1:]
        output.append(fmt % (link, key, sorted_d[key]))
    return output

class Paginator:
    def __init__(self, post_count, ppp, loc=None):
        if post_count <= 0:
            raise Exception
        if not loc:
            loc = "pages"
        if loc and not os.path.exists(loc):
            os.mkdir(loc)
        self.TOTAL_POSTS = post_count
        self.PPP = ppp
        self.TOTAL_PAGES = int(post_count / self.PPP)
        self.SUBDIR = loc
        self.FILENAME = "%i.html"
        self.written = []

    def toc(self, current_page=None, path=None): # style 1
        if self.TOTAL_PAGES < 1:
            return "[no pages]"
        if path == None:
            path = self.SUBDIR
        # for page 'n' do not create an anchor tag
        fmt = "<a href=\"%s\">[%i]</a>" # (filename, page number)
        anchors = []
        for i in reversed(range(self.TOTAL_PAGES)):
            if i != current_page:
                x = path + "/" + (self.FILENAME % i)
                anchors.append(fmt % (x, i))
            else:
                anchors.append("<b>[%i]</b>" % i)
        return "\n".join(anchors)

    # makes one page
    def singlepage(self, template, tagcloud, timeline_, i=None, p=None):
        tc = "\n".join(tagcloud)
        tl = "\n\n".join(timeline_)
        toc = self.toc(i, p)
        return template.format(
            postcount=self.TOTAL_POSTS, tags=tc, pages=toc, timeline=tl
        )

    def paginate(self, template, tagcloud, timeline, is_tagline=False):
        outfile = "%s/%s" % (self.SUBDIR, self.FILENAME)
        l = len(timeline)
        for i in range(0, self.TOTAL_PAGES):
            fn = outfile % i
            with open(fn, 'w') as f:
                self.written.append(fn)
                prev = l - (self.PPP * i)
                curr = l - self.PPP * (i + 1)
                sliced = timeline[curr:prev]
                f.write(self.singlepage(template, tagcloud, sliced, i, "."))
        return

import argparse

if __name__ == "__main__":
    def sort(filename):
        def export(new_content, new_filename):
            with open(new_filename, 'w') as f:
                print(file=f)
                for post in new_content:
                    print(post.timestamp, file=f)
                    print("".join(post.message), file=f)
            return
        posts = parse_txt(filename)
        posts.sort(key=lambda e: e.get_epoch_time())
        outfile = ("%s.sorted" % filename)
        print("Sorted text written to", outfile)
        export(reversed(posts), outfile)

    def get_args():
        p = argparse.ArgumentParser()
        p.add_argument("template", help="an html template file")
        p.add_argument("content", help="text file for microblog content")
        p.add_argument("--sort", action="store_true",
            help="sorts content from oldest to newest"
            " (this is a separate operation from page generation)")
        p.add_argument("--skip-fetch", action="store_true",
            help="skips fetching profile data from remote sources;"
            " has no effect if webring is not enabled")
        p.add_argument("--new-posts", type=int, nargs='?',
            help="generate pages based only on new entries; "
            "if I wrote 5 new posts then --new-posts=5")
        args = p.parse_args()
        if args.sort:
            sort(args.content)
            exit()
        return args.template, args.content, args.skip_fetch, args.new_posts

    # assume relative path
    def demote_css(template, css_list, level=1):
        prepend = ""
        if level == 1:
            prepend = '.'
        else:
            for i in range(level):
                prepend = ("../%s" % prepend)
        tpl = template
        for css in css_list:
            tpl = tpl.replace(css, ("%s%s" % (prepend, css)))
        return tpl

    def writepage(template, timeline, tagcloud, config, subdir=None, paginate=True):
        count = len(timeline)
        html = ""
        with open(template, 'r') as f:
            html = f.read()
        try:
            p = config["postsperpage"]
            pagectrl = Paginator(count, p, subdir)
        except ZeroDivisionError as e:
            print("error: ", e, ". check 'postsperpage' in config", file=sys.stderr)
            exit()
        except Exception as e:
            print("error: ", e, ("(number of posts = %i)" % count), file=sys.stderr)
            exit()
        index = config["landing_page"]
        latest = timeline[:pagectrl.PPP]
        link_from_top = "./tags/%s/" + index
        link_from_subdir = "../tags/%s/" + index
        link_from_tagdir = "../%s/" + index
        cloud = ""
        level = 1
        is_tagline = False
        if subdir == None: # if top level page
            cloud = make_tagcloud(tagcloud, link_from_top)
            print(pagectrl.singlepage(html, cloud, latest))
            cloud = make_tagcloud(tagcloud, link_from_subdir)
        else:
            if subdir != "webring": # timelines per tag
                is_tagline = True
                level += 1
                cloud = make_tagcloud(tagcloud, link_from_tagdir)
            else:
                cloud = make_tagcloud(tagcloud, link_from_subdir)
            demoted = demote_css(html, config["relative_css"], level)
            filename = "%s/%s" % (subdir, index)
            with open(filename, 'w') as f: # landing page for tag
                pagectrl.written.append(filename)
                page = pagectrl.singlepage(demoted, cloud, latest, p=".")
                f.write(page)
        if paginate:
            pagectrl.paginate(
                demote_css(html, config["relative_css"], level),
                cloud, timeline, is_tagline)
        return pagectrl.written

    import toml
    def load_settings(filename="settings.toml"):
        s = dict()
        if os.path.exists(filename):
            with open(filename, 'r') as f:
                s = toml.loads(f.read())
        else:
            s = None
        return s

    import json
    def export_profile(post_count, last_update, config):
        if "profile" not in config:
            return
        p = config["profile"]
        p["post-count"] = post_count
        p["last-updated"] = last_update
        if "username" not in p or "url" not in p:
            print("Warning: no profile exported", file=sys.stderr)
            return
        with open(config["file_output"], 'w') as f:
            print(json.dumps(p), file=f)

    def get_webring(f_cfg):
        import pycurl
        from io import BytesIO

        def get_proxy():
            proxy = ""
            if "http_proxy" in os.environ:
                proxy = os.environ["http_proxy"]
            elif "https_proxy" in os.environ:
                proxy = os.environ["https_proxy"]
            host = proxy[proxy.rfind('/') + 1: proxy.rfind(':')]
            port = proxy[proxy.rfind(':') + 1:]
            # a SOCKS proxy is indicated by a socks:// or socks5h:// scheme;
            # find() returns -1 (truthy) when absent, so compare with >= 0
            is_socks = proxy.find("socks://") >= 0 or proxy.find("socks5h://") >= 0
            return host, int(port), is_socks

        def fetch(url_list):
            curl = pycurl.Curl()
            if "http_proxy" in os.environ or "https_proxy" in os.environ:
                hostname, port_no, is_socks = get_proxy()
                curl.setopt(pycurl.PROXY, hostname)
                curl.setopt(pycurl.PROXYPORT, port_no)
                if is_socks:
                    curl.setopt(pycurl.PROXYTYPE, pycurl.PROXYTYPE_SOCKS5_HOSTNAME)
            datum = []
            meta = []
            for url in url_list:
                buf = BytesIO()
                curl.setopt(curl.WRITEDATA, buf)
                curl.setopt(pycurl.URL, url)
                try:
                    curl.perform()
                    datum.append(buf)
                    meta.append(curl.getinfo(curl.CONTENT_TYPE))
                except pycurl.error as e:
                    print(e, ": ", url, file=sys.stderr)
                # print(buf.getvalue(), "\n\t", curl.getinfo(curl.CONTENT_TYPE), file=sys.stderr)
            curl.close()
            assert(len(datum) == len(meta))
            return datum, meta

        def to_json(curl_outs):
            json_objs = []
            for buf in curl_outs:
                try:
                    json_objs.append(json.loads(buf.getvalue()))
                except Exception as e:
                    print(e, file=sys.stderr)
            return json_objs

        def render(profiles, template, date_format):
            rendered = []
            SHORT_BIO_LIMIT = 150
            for profile in profiles:
                try:
                    epoch_timestamp = profile["last-updated"]
                    if not isinstance(epoch_timestamp, int):
                        epoch_timestamp = 0
                    post_count = profile["post-count"]
                    if not isinstance(post_count, int):
                        post_count = 0
                    self_desc = profile["short-bio"]
                    if len(profile["short-bio"]) >= SHORT_BIO_LIMIT:
                        self_desc = profile["short-bio"][:SHORT_BIO_LIMIT] + "..."
                    foo = template.format(
                        __avatar__=escape(profile["avatar"]),
                        __handle__=escape(profile["username"]),
                        __url__=escape(profile["url"]),
                        __post_count__=post_count,
                        __shortbio__=escape(self_desc),
                        __lastupdated__=strftime(
                            date_format, localtime(epoch_timestamp)))
                    rendered.append(foo)
                except KeyError as e:
                    print("remote profile is missing key: ", e, file=sys.stderr)
                    print("\tsource: ", profile, file=sys.stderr)
            return rendered

        def get_avatars(profiles, save_path, img_src):
            import hashlib
            imgs, info = fetch([p["avatar"] for p in profiles])
            length = len(imgs)
            if length != len(profiles) or length == 0:
                print("error in retrieving images", file=sys.stderr)
                return
            for i in range(0, length):
                content_type = info[i].split('/')
                ext = content_type.pop()
                if content_type.pop() != "image":
                    print("\tskip: not an image", file=sys.stderr)
                    continue
                data = imgs[i].getvalue()
                h = hashlib.sha1(data).hexdigest()
                filename = "%s.%s" % (h, ext)
                path = "%s/%s" % (save_path, filename)
                profiles[i]["avatar"] = "%s/%s" % (img_src, filename)
                if not os.path.isfile(path):
                    with open(path, "wb") as f:
                        f.write(data)

        j, m = fetch(f_cfg["list"])
        list_of_json_objs = to_json(j)
        if list_of_json_objs == []:
            print("no remote profiles loaded", file=sys.stderr)
            return []
        if f_cfg["internal-avatars"]["enabled"]:
            a = f_cfg["internal-avatars"]["local_path_to_avatars"]
            b = f_cfg["internal-avatars"]["path_to_avatars"]
            get_avatars(list_of_json_objs, a, b)
        try:
            list_of_json_objs.sort(key=lambda e: e["last-updated"], reverse=True)
        except KeyError:
            pass
        return render(list_of_json_objs, f_cfg["format"], f_cfg["date_format"])

    def main(tpl, content, skip_fetch, new_posts):
        cfg = load_settings()
        if cfg == None:
            print("exit: no settings.toml found.", file=sys.stderr)
            return
        if "post" not in cfg:
            print("exit: table 'post' absent in settings.toml", file=sys.stderr)
            return
        if "page" not in cfg:
            print("exit: table 'page' absent in settings.toml", file=sys.stderr)
            return
        p = parse_txt(content)
        tl, tc, tg = get_posts(p, cfg["post"], new_posts)
        if tl == []:
            return
        # main timeline
        updated = []
        updated += writepage(tpl, tl, tc, cfg["page"],
            paginate=(new_posts is None))
        # timeline per tag
        if tc != dict() and tg != dict():
            if not os.path.exists("tags"):
                os.mkdir("tags")
            tl.reverse()
            for key in tg.keys():
                tagline = []
                for index in tg[key]:
                    tagline.append(tl[index])
                # [1:] means to omit hashtag from dir name
                wp = True # will paginate
                if new_posts is not None \
                and len(tagline) > cfg["page"]["postsperpage"]:
                    wp = False
                updated += writepage(
                    tpl, tagline, tc, cfg["page"],
                    subdir="tags/%s" % key[1:],
                    paginate=wp)
        if "webring" in cfg:
            if cfg["webring"]["enabled"] == True:
                export_profile(
                    len(p), p[0].get_epoch_time(), cfg["webring"])
                if not skip_fetch:
                    fellows = get_webring(cfg["webring"]["following"])
                    if fellows != []:
                        updated += writepage(
                            tpl, fellows, tc, cfg["page"], subdir="webring")
        with open("updatedfiles.txt", 'w') as f:
            for filename in updated:
                print(filename, file=f)
            if "latestpages" in cfg:
                for page in cfg["latestpages"]:
                    print(page, file=f)

    try:
        main(*get_args())
    except KeyError as e:
        traceback.print_exc()
        print("\n\tA key may be missing from your settings file.", file=sys.stderr)
    except dateutil.parser._parser.ParserError:
        traceback.print_exc()
        print("\n\tFailed to interpret a date from string.",
            "\n\tYour file of posts may be malformed.",
            "\n\tCheck if your file starts with a line break.", file=sys.stderr)
    except toml.decoder.TomlDecodeError:
        traceback.print_exc()
        print("\n\tYour configuration file is malformed.", file=sys.stderr)
    except FileNotFoundError as e:
        traceback.print_exc()
        print("\n\tA potential cause is attempting to save a file"
            " to a folder that does not exist.", file=sys.stderr)
@ -1,11 +1,30 @@
import sys, os, subprocess, getpass, pycurl, urllib.parse
if __name__ == "__main__":
    def get_proxy():
        proxy = ""
        if "http_proxy" in os.environ:
            proxy = os.environ["http_proxy"]
        elif "https_proxy" in os.environ:
            proxy = os.environ["https_proxy"]
        host = proxy[proxy.rfind('/') + 1: proxy.rfind(':')]
        port = proxy[proxy.rfind(':') + 1:]
        # a SOCKS proxy is indicated by a socks:// or socks5h:// scheme;
        # find() returns -1 (truthy) when absent, so compare with >= 0
        is_socks = proxy.find("socks://") >= 0 or proxy.find("socks5h://") >= 0
        return host, int(port), is_socks

    def api_upload(endpoint, dest_fmt="/microblog/%s"):
        pages = []
        with open("updatedfiles.txt") as f:
            pages = f.readlines()
        c = pycurl.Curl()

        if "http_proxy" in os.environ or "https_proxy" in os.environ:
            hostname, port_no, is_socks = get_proxy()
            c.setopt(pycurl.PROXY, hostname)
            c.setopt(pycurl.PROXYPORT, port_no)
            if is_socks:
                c.setopt(pycurl.PROXYTYPE, pycurl.PROXYTYPE_SOCKS5_HOSTNAME)

        c.setopt(c.URL, endpoint)
        c.setopt(c.POST, 1)
        for page in pages: