Category: Technical
Posted by: seanmcox

At the beginning of last month, Twitter removed support for their simple authentication mechanism, so I was forced to carry out my plan to implement OAuth for authentication instead. Fortunately, Twitter linked to some helpful examples. Nevertheless, even with a library to help me along, the process was fairly painful. (Part of that might have been due to the fact that I had a cold for the first part of the work, and the flu for the rest.)

In any case, my auto-tweeting and my site-integrated tweeting are now both functional. Hoorah!



Category: Technical
Posted by: seanmcox

This piece is being written at the instigation of a few co-workers.

What Is a Wiki

A wiki is a kind of content management system for a website. As such, it is a system designed to manage web pages (often referred to as articles).

A wiki generally allows some (often all) users to edit the content of pages in their browser in real time. User discussions are also typically facilitated for the purpose of enabling collaboration, not simply to allow users to express opinions passively. As such, a wiki is a collaborative endeavor designed to leverage the experience of individuals who do not manage the wiki. (Sometimes management tasks can even be delegated to interested parties.) This is the primary and definitive feature of a wiki.

A wiki is typically organized with a predominantly flat structure, though a superficial substructure can form based on the way pages relate to one another. This organization means that related pages are interconnected and that the substructure is typically organic. It also means that you don't typically have to go hunting for the page about "termites", because there's probably only one place for it to be.

A wiki facilitates interlinking of content pages. For intra-wiki linking, you typically don't have to remember much more than the name of the page in order to connect to it.
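As an illustration (assuming MediaWiki, the software mentioned later in these posts; other wiki engines use similar conventions), an intra-wiki link is just the target page's name in double square brackets, optionally with a "pipe" to change the display text:

```
Termites are social insects. See [[Termite]] for details,
or compare with [[Ant|ants]], which uses a piped link.
```

No URL, path, or ID needs to be remembered; the page name is the address.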

The Strengths of a Wiki

Since wikis allow real-time, in-browser page editing by a variety of users, content can evolve rapidly. Small articles can be created when little is known, because there's no expectation that you have to "finish it now", or even that you have to finish it yourself. The expertise of a variety of individuals with diverse experience on a given topic can easily be leveraged.

Due to all the topically relevant interlinking, wikis typically rank well with Google.

The flat structure is particularly handy when the pages to be managed can be named in fairly obvious ways, the content is topical in nature, and the domains of interest don't intersect too much. That is, wikis are great for managing reference material.

Some Examples of Good Wiki Themes

An encyclopedia. (See: Wikipedia) Often these are themed or fan-based.
Personal genealogy. (See: Cox Genealogy)
A dictionary. (See: Wiktionary)
Software documentation: FAQs, tips, supplemental documentation, signatures, constraints. (See: Wiki:NucleusCMS)

Some Examples that Don't Work

A meeting minutes repository: Since the information in minutes is typically not topical in nature, interlinking becomes extremely unintuitive, if it is practical or even desirable at all. Page naming is probably best accomplished with a date and meeting title, which generally tells you little about the page's contents, and all organizing and sorting must be done manually. In comparison, a simple file system's tree structure will usually sort files chronologically automatically, especially if they are named chronologically (e.g., "2009-09-26 Staff Meeting"). A file system also provides the freedom to use whatever editor or file format one chooses, which is almost always easier than writing wiki markup. The one advantage a wiki might provide is the easy inclusion of auxiliary materials, though a simple file system might be easier even for that purpose. Since minutes are typically not expected to be improved upon and revised, and since this is typically not even desirable, minutes storage takes no advantage of the wiki's primary and definitive feature.

A lessons learned repository: Lessons learned documentation suffers from almost exactly the same pitfalls. In short, a wiki is ill-advised for the storage of largely stand-alone and static documents. These may be included as source media, but they do not make good primary content for a wiki. A wiki would merely add a lot of overhead to the management of such a repository while providing hardly any benefit at all.



Category: Technical
Posted by: seanmcox
For the past couple of weeks, I've been experimenting with Twitter. That is, I'm not really micro-blogging about my breakfast or anything like that, but I have been thinking about how to integrate Twitter into various parts of my online presence.

My initial reason for trying out Twitter was for the purpose of promoting the website for my father-in-law's paintings, and in that vein, my first bit of technical Twitter work was to create a simple form in his admin area where he (or I) could submit updates. I also installed a plug-in for the website's blog which posts an update to Twitter whenever a new item is posted.

I have other ideas for integrating Twitter into his website in a very automated way. However, as getting my father-in-law to help promote himself online is like convincing Ephraim to be quiet at church, I've really lost a lot of motivation for such updates.

Nevertheless, I've also had some thoughts regarding my own website, and as kind of a foundation for future Twitter work, I wrote a little script in PHP, based upon some example code, which facilitates posting tweets. The only thing I do with this right now is provide a form for those with a Cox Family account so that they can tweet from my website. (Only Cassey and I have Cox Family accounts. A Cox Family account is also used to provide a personal feed reader and administrative login functions.) I added the Twitter application to my Facebook account as well. I never used to update my Facebook status much, but I figured that as long as I was going to try Twitter, I could kill two birds with one stone.

My latest achievement, however, was to create a Nucleus plugin that will tweet whenever I (or Cassey) blog, or even whenever we comment on a blog. Both functions can be turned on and off for each user of the blog software. I had to write my own plugin because I found that the only Twitter plugins that existed for Nucleus were for the purpose of adding a recent-tweet list to the sidebar. (Something I may want to do at some point... but maybe not.) Since the plugin doesn't already exist, it would make sense for me to contribute my work, which, on top of everything else, is also an excellent promotional move.

One thing I've discovered while adding comment and blog update tweeting to my blog is that one doesn't really want to import all of one's tweets to Facebook. Tweets are often of a very trivial nature and can occur very frequently. Importing them into Facebook, then, becomes a great way to annoy all of your friends with a barrage of minutiae they couldn't care less about. I know that I'm not the only person to have made this discovery: searching for a solution, I found this was something of a hot topic. An old friend, Jonathan Howard, was also having similar problems at the same time.

Unfortunately, there really aren't any good solutions at the moment. I found one filtering plugin, but it's not very customizable and, even worse, it's not currently working. (Twitter has a bug in their search tool affecting newer users, and the plugin seems to rely upon this search tool... I'm not sure why.) It sounds like a problem just begging for a solution, and an app that people will have a need for. (So, either a really good idea for a project of mine, or a near certainty that I'd be duplicating effort that is already underway elsewhere.)

Category: Technical
Posted by: seanmcox
I was recently noticing some nifty new features in the MormonWiki.com website, so I decided it was time to upgrade my genealogy wiki.

I had originally installed version 1.11.0 of MediaWiki, and taking a look at the MediaWiki website, I noted that 1.13.3 was available. The upgrade process wasn't too painful, except that their distribution is provided as a gzipped tar package, which is typical for Unix and Linux but unusual for Windows, which cannot handle such packages natively. To compound the problem, I can't easily telnet in to my Linux web server and just unpack the file there.

In the end, I found that the job required even more work of the sort that would typically be done from a terminal-type interface. My solution, however, was to just write a PHP script to execute the requisite terminal commands, or their equivalents. It worked beautifully.

I don't know if you'll notice any differences, but I'm rather pleased to have upgraded. I think I'll do some more tinkering with my script and upgrade the rest of my wikis. The net result should be that for a standard upgrade, all I'll need to do is make backups, drop the new delivery onto the server, and then run my script from the browser.

The backups will be a pain, but they're something I should be doing more often anyway.

Category: Technical
Posted by: seanmcox
At Boeing, like at many other companies, we have annual and semiannual career counseling and performance evaluations with our supervisors, and one of the tools used to help us direct ourselves is the Business Goals and Objectives. I switched managers midway through the year, and my new manager, as it turns out, has drastically different expectations than my previous manager had. So, among other things, my personal development goals got wiped out and I had to come up with something else.

Really, I think this was a good something else, but regardless, it set me behind schedule, and one of my items still wasn't done before the Christmas break. The plan was to start experimenting with a (to me) new design pattern in which certain functionality could be made modifiable by a tool's user at run time. More particularly, the user would be provided with a way to edit code, which would then be compiled and run on the fly when the function being edited was invoked.

Anyhow, the goal needed to be specific and measurable, so the goal was to write a little application that had a button and a text field. When the user clicked the button, code was to be generated incorporating the text from the text field, the code was to be compiled, and then the resulting class was to be run.

I came, I saw, and I conquered. Here's the class that does most of the interesting stuff (I'll see how readable I can make it):





package com.shtick.apps.dynamiccode.components;

import java.awt.BorderLayout;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.PrintStream;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.Stack;

import javax.swing.JButton;
import javax.swing.JOptionPane;
import javax.swing.JPanel;
import javax.swing.JScrollPane;
import javax.swing.JTextArea;
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

import com.shtick.apps.components.ErrorDialog;
import com.shtick.apps.dynamiccode.Info;

/**
* A good simple test line to compile is: System.out.println("Hello World!");
*
* @author Sean
*
*/
public class DynamicCodePane extends JPanel implements ActionListener{
    private DynamicCodeFrame frame;
    private JButton doItButton;
    private JTextArea codeText;
    private static final String ROUTINE_PACKAGE="com/shtick/apps/dynamiccode/routines";

    public DynamicCodePane(DynamicCodeFrame frame) {
        super(new BorderLayout());
        this.frame=frame;

        // Make a do-it button.
        JPanel buttonPane=new JPanel();
        doItButton=new JButton("Do It");
        doItButton.addActionListener(this);
        buttonPane.add(doItButton);

        // Make a code entry field.
        codeText=new JTextArea();
        JScrollPane codePane=new JScrollPane(codeText);

        this.add(buttonPane,BorderLayout.NORTH);
        this.add(codePane,BorderLayout.CENTER);
    }

    /* (non-Javadoc)
     * @see java.awt.event.ActionListener#actionPerformed(java.awt.event.ActionEvent)
     */
    @Override
    public void actionPerformed(ActionEvent e) {
        if(e.getSource()==doItButton){
            try{
                // create/update class code
                generateCode();
                // compile class code
                compile();
                // run class
                runGeneratedClass();
            }
            catch(Throwable t){
                ErrorDialog.showError(frame, t);
            }
        }
    }

    private void generateCode() throws IOException{
        // Ensure that the needed folder exists. The stack is built from the
        // source file up to the nearest existing ancestor directory.
        File sourceFile=getDynamicRoutineSourceFile();
        File parentFile;
        Stack<File> pathStack=new Stack<File>();
        pathStack.push(sourceFile);
        while(!pathStack.peek().exists()){
            parentFile=pathStack.peek().getParentFile();
            if(parentFile==null)
                throw new FileNotFoundException("Parent directory of "+pathStack.peek().toString()+" can not be resolved.");
            pathStack.push(parentFile);
        }
        // Create the missing directories from the top down, stopping before
        // the source file itself.
        while(pathStack.size()>2){
            pathStack.pop();
            if(!pathStack.peek().mkdir())
                throw new IOException("Failed to create directory, "+pathStack.peek().toString()+".");
        }

        // Create/update the class code, wrapping the user's text in a
        // Routine class with a no-argument routine() method.
        FileOutputStream fos=new FileOutputStream(sourceFile);
        PrintStream fout=new PrintStream(fos);
        try{
            fout.println("package com.shtick.apps.dynamiccode.routines;");
            fout.println();
            fout.println("public class Routine{");
            fout.println("    public void routine(){");
            fout.print(codeText.getText());
            fout.println("    }");
            fout.println("}");
        }
        finally{
            fout.close();
        }
    }

    /**
     * Compile the generated class code.
     *
     * Reference: http://jgeeks.blogspot.com/2007/12/programmatically-compiling-java-program.html
     */
    private void compile(){
        // Note: this requires a JDK; under a plain JRE,
        // getSystemJavaCompiler() returns null.
        JavaCompiler compiler=ToolProvider.getSystemJavaCompiler();
        // compile
        int compilationResult = compiler.run(System.in, System.out, System.out, getDynamicRoutineSourceFile().toString());
        if (compilationResult == 0) {
            JOptionPane.showMessageDialog(this, "Compilation is successful", "Compile Results", JOptionPane.INFORMATION_MESSAGE);
        }
        else{
            JOptionPane.showMessageDialog(this, "Compilation Failed", "Compile Results", JOptionPane.ERROR_MESSAGE);
        }
    }

    /**
     * http://java.sun.com/developer/JDCTechTips/2003/tt0819.html
     * @throws Throwable
     */
    private void runGeneratedClass() throws Throwable{
        // Get a fresh class loader so the recompiled class isn't masked by
        // a previously cached version.
        ClassLoader loader = new URLClassLoader(new URL[] {getDynamicRoutineRootFolder().toURI().toURL()});
        // run class code
        Class<?> objectClass=loader.loadClass("com.shtick.apps.dynamiccode.routines.Routine");
        Object object=objectClass.getDeclaredConstructor(new Class[]{}).newInstance(new Object[]{});
        objectClass.getMethod("routine", new Class[]{}).invoke(object);
    }

    /**
     *
     * @return the full path to the folder containing the Routine class.
     */
    private File getDynamicRoutineFolder(){
        return new File(getDynamicRoutineRootFolder(),ROUTINE_PACKAGE);
    }

    /**
     * The root package for the routine can't be in the same place as
     * the root package for the application; otherwise, the
     * URLClassLoader will load the class via the system class loader,
     * which will cache the class, preventing dynamic reloading.
     *
     * @return the full path to the folder which acts as the root of
     * the package containing the Routine class.
     */
    private File getDynamicRoutineRootFolder(){
        return new File(Info.getRootFolder(),"routine");
    }

    private File getDynamicRoutineSourceFile(){
        return new File(getDynamicRoutineFolder(),"Routine.java");
    }
}







26/09: Subversion

Category: Technical
Posted by: seanmcox
Thanks to David, I now have a version control system set up for my code.

For those who don't know, version control is like a wiki, for source code. (More accurately, a wiki is a system designed for doing version control for articles. Version control for code is older than wikis are.)

Like a wiki, a version control system allows various people to work collaboratively. Code is checked out from a repository to be worked on and changes are committed back to the repository when ready.

A history of changes is kept, allowing reversion to, and comparison with, previous versions.

On top of that, since the code is managed on the server and separately modified on a client workstation, it is effectively backed up on a regular basis, which is even better than a wiki. (Wikis require actual effort to back up.)

Anyhow, I've been wanting to set up a version control system ever since I first worked with such systems as an undergraduate, programming for the College of Humanities at UCR. Unfortunately, I can't install version control on the server that hosts this blog. It's not my exclusive server; it's a shared server, and I can't just install and uninstall things. I have to live with the permissions I'm granted and the services that are installed.

My office computer at home can act as a server. Not many years ago, I had a web server set up and was able to verify that an individual could find the web page by entering the IP address into their browser. Unfortunately, the IP address my office computer is assigned by my ISP is a dynamic IP address. For those who don't know, this is really the most common kind of IP assignment, and it basically means that I can't count on the IP address staying constant; every once in a while, I may be given a new one. So, I can't use that IP address for a standard domain name, and I can't use it directly with any acceptable kind of reliability either.

The solution came from David in the form of Dynamic DNS, a free service which provides a subdomain designed to resolve to a dynamic IP address. My office computer, when it is on, is now theshtick.dyndns.org. In order to get it to work, I had to install a little application on my office computer that watches the computer's IP address. Whenever the IP changes, the Dynamic DNS service is informed and my subdomain is kept current. (At the time of writing, my office computer is turned off to save power, but if I, or a friend, wanted to work on the code, I could just start leaving it on.)

Really, my computer isn't directly connected to my ISP; it's connected through a router. So, in reality, I had a static, but private, IP address. Setting up Dynamic DNS required some router configuration, and in the end, my new subdomain only works from outside my home network. (Those little Verizon DSL routers are just stubborn sometimes.) From inside, my office computer has a name that I can use to browse to it. My sister, Ronni (who really needs a better public web page than the one I won't bother linking to), helped me verify that my setup was actually working.

In the end, it works, and I'm immeasurably thrilled.

Thanks David and Ronni. :-)


31/07: The Template

Category: Technical
Posted by: seanmcox
On Tuesday night, I determined to work on my father-in-law's website. Since we have a kind of partnership right now, I suppose it's my website as well. However, I'll probably keep calling it his website. After all, it's named after him and centered on what is largely his work.

Anyhow, I determined to work on the website...

» Read More



Category: Technical
Posted by: seanmcox
Some time ago I gave my mother-in-law some thoughts, from a physicist's perspective, on the subject of global warming. Basically, I noted that temperature correlates monotonically with heat energy. (Meaning, as heat energy goes up, temperature goes up, and vice versa.)

So, if the Earth gets more heat energy it will get hotter. (OK, so that seems obvious, but even so, it's a critical point to make.)
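The reasoning behind that "obvious" point can be sketched with the standard heat-capacity relation (this is my own illustration, not something from the original post):

```latex
% For a body of mass m and specific heat capacity c > 0,
% heat added (Q) and temperature change (\Delta T) are related by
Q = m\,c\,\Delta T
% Since m and c are positive, \Delta T has the same sign as Q:
% add heat energy and the temperature rises; remove it and it falls.
```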

There are two ways the earth can gain heat energy...

» Read More



Category: Technical
Posted by: seanmcox
Yesterday I had an idea for my image editor... but I didn't really have time to implement it. I had a little time, though, so instead I attacked an old but important issue my program has with saving GIFs.

The problem, I thought, was two-fold. The error I was receiving indicated that the GIF image writer couldn't save images with the ColorModel I was using. So, as I understood things, in order to save as a GIF, I would need to convert the ColorModel to a more GIF-like color model, and in order to facilitate this, I would need my program to have better handling of color models. A number of the operations my program performed would create a new image with a default color model and then inject modified pixels into it, then swap it out with the old image.

Yesterday, I updated the image editor to swap the pixels in an existing image directly, preserving the original ColorModel. As a result, I discovered a couple of interesting things. First, the images I'd read from files weren't using the color models I thought they were. (They weren't necessarily using color models that reflected the file structure at all.) Second, the GIF writer could work with these seemingly inappropriate color models. (On top of that, the Java-based GIF writer I was using appeared to be extremely intelligent in comparison with what I've seen in most image editors.) I'm not sure why it couldn't work with the color model I was forcing on the images before, but with things apparently working so nicely, I'm quite close to having an image editor that I'm not ashamed to make public. (Perhaps I'll be able to release over Memorial Day weekend.)
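For reference, the kind of conversion described above, redrawing an image into an indexed, GIF-friendly color model before handing it to the GIF writer, can be sketched with the standard BufferedImage and ImageIO classes. This is not the editor's actual code; the class name and output file name are my own, and as noted, recent ImageIO GIF writers can often handle non-indexed images directly anyway.

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;

import javax.imageio.ImageIO;

public class GifSaveSketch {
    /**
     * Redraws the source image into a new image backed by an indexed
     * color model (the palette-based kind of model the GIF format uses).
     */
    public static BufferedImage toIndexed(BufferedImage source) {
        BufferedImage indexed = new BufferedImage(
                source.getWidth(), source.getHeight(),
                BufferedImage.TYPE_BYTE_INDEXED);
        Graphics2D g = indexed.createGraphics();
        g.drawImage(source, 0, 0, null);
        g.dispose();
        return indexed;
    }

    public static void main(String[] args) throws IOException {
        // A small test image with a couple of solid colors.
        BufferedImage image = new BufferedImage(16, 16, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = image.createGraphics();
        g.setColor(Color.RED);
        g.fillRect(0, 0, 8, 16);
        g.setColor(Color.BLUE);
        g.fillRect(8, 0, 8, 16);
        g.dispose();

        // ImageIO.write returns false if no writer could handle the image.
        boolean written = ImageIO.write(toIndexed(image), "gif", new File("test.gif"));
        System.out.println(written ? "saved" : "no suitable writer");
    }
}
```

The redraw-into-a-new-image approach quantizes colors through the default palette, which loses fidelity on photographs but is adequate for the simple case sketched here.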

With nit-picky little details like that taken care of, I'll be able to use my future efforts to build in more interesting features.

Anyhow, I was quite happy with how everything fell into place.

I'm hoping to devote some more time to my Quorum Secretary app. It's the sort of application that I think could really revolutionize home teaching and other aspects of the work of quorums, but it really needs a database upgrade, and I also need to develop a better layout manager to make printing more fluid. The layout managers available to me just don't generate layouts that translate well to printed material, and the application's printing capabilities are terribly kludgy. I'm thinking of designing a layout manager that implements some of the design aspects of HTML.

Category: Technical
Posted by: seanmcox
This morning I got a database connection error again, so mysql_pconnect() didn't work. That left only one more option before switching hosting. So, I backed up my database, created a new database, imported my backup into the new database, and reconfigured my blog's configuration file to use the new database.

The transition seems to have gone pretty smoothly.

When all is said and done, if there are no more problems, then it was officially my hosting provider's fault and had nothing to do with documented connection limits, or anything like that. (I'd probably like having 50 readers at once, but the statistics aren't nearly that high yet.) Since I worked in mysql_pconnect(), I should theoretically be able to handle those 50 readers now... at least, without database connection failures.
