Luminotes.com will shut down on March 1st.
How to trigger an html dump from cron?


I know it's not a tech support problem per se; I just didn't want to clutter the other forum with a topic most people weren't likely to be interested in.

Since part of what I'm using it for here is a network guide, it could be a bit problematic for someone to just walk in here and find that the cluster (or building) that held all the instructions was stolen or destroyed by fire or something :-)

I do pull an SQL dump off-site daily, but that won't really help someone just walking in -- you know, if I get hit by a bus or leave the company or something -- since they'd need to know of its existence in the first place, and how to restore it onto a new cluster or a plain box.

I did glance at Html_file.py and friends, but don't see an easy way to call them from a one-liner in a cronjob. I'd like to trigger an export of either a certain notebook or all notebooks to a predefined location for backup to another box somewhere on the network in another building. Privacy isn't at issue as they are shared corporate notebooks in the first place which exist only on their LAN.

grep -rin also found me Wiki.js for the download, but I have zero idea how I could invoke that JS and authenticate to it from a cron job! lynx/links and friends with -dump likely can't be made to work, because of the JS.

Any way around this, or is it just too much bother? Maybe there's another way to accomplish the same goal that you can think of?

p.s. I'm posting this from an old eee-pc in the server room. Luminotes is still usable and comfortable on this tiny screen.

Re: HTML dump from cron

Unfortunately there isn't yet a good way to perform an HTML notebook download from an automated script, as Luminotes requires the client to be authenticated before any download can occur. However, here is something that you are welcome to try:
  1. Log into your Luminotes account from your web browser.
  2. Copy the session_id cookie that Luminotes sends to your browser. In Firefox, you can see this by going to Edit -> Preferences -> Privacy -> Show Cookies, and then searching the list of cookies.
  3. Set up a cron job with your URL fetcher of choice (wget, curl, or something else) to download from the URL https://yourhostname.com/notebooks/export?notebook_id=yournotebookid&format=html
  4. When downloading, send the session_id cookie to the server. With wget, you can use the --load-cookies option to load the Luminotes session_id cookie from file. Note that the session will expire if unused for 72 hours.
That should do it... in theory, anyway. This is completely untested. And since this isn't yet a public API, I can't guarantee that it will continue to work in future versions. At some point I'd like to develop a real, official API to support this sort of thing.
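Concretely, steps 2-4 might be wired up something like this. This is only a sketch, and just as untested as the steps above: yourhostname.com, yournotebookid, and the pasted session_id value are all placeholders you'd substitute yourself.

```shell
# Rough sketch of steps 2-4 above. yourhostname.com, yournotebookid, and the
# session_id value are placeholders -- and as noted, this is untested.

COOKIE_FILE=/tmp/luminotes-cookies.txt

# wget's --load-cookies option expects the Netscape cookie-file format:
# domain <tab> subdomains <tab> path <tab> secure <tab> expiry <tab> name <tab> value
printf 'yourhostname.com\tFALSE\t/\tTRUE\t2147483647\tsession_id\t%s\n' \
    'PASTE_SESSION_ID_HERE' > "$COOKIE_FILE"

# The cron entry itself (here, nightly at 02:30) would then look something like:
#   30 2 * * * wget --load-cookies /tmp/luminotes-cookies.txt \
#       -q -O /backup/notebook.html \
#       'https://yourhostname.com/notebooks/export?notebook_id=yournotebookid&format=html'
```

Note the single quotes around the export URL: the & in the query string is special to the shell and has to be quoted. Also, since the session expires after 72 hours of inactivity, the cron job has to run at least that often, or the cookie will need to be copied out of the browser again.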

Dan

copying cookie

Thanks Dan! I'll play with that after lunch today. Sounds like a plan. Then I just have to hack a perpetual cookie in somewhere... :-)
edit: forgot to post back on my failure yesterday. I may have malformed the cookie, though. Anyhow, I got a page warning about needing JS turned on. For now, I'll add it to the monthly maint. -- right before drbd fail-over testing.

