The Challenges of Creating Law for A Spectrum: Copyright and Open Source

Article I, Section 8, Clause 8 of the United States Constitution gives Congress the power, “To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.”

Lawrence Lessig argues in his book “Free Culture” that this clause of the Constitution involves a balance: preserving enough intellectual property rights to give creators an incentive to devote time and resources to new work, while preserving the public’s interest in a free flow of ideas that can build on each other to make a better society.  After discussing how corporate interests have pushed copyright law heavily toward the personal-property end of this spectrum, Lessig offers his own solutions.  He favors moving away from both the all-rights-reserved and the no-rights-reserved models.  Lessig explains how Creative Commons is used to refine copyright so creators still retain some rights of their choosing.  This allows other people to share, use, and remix the original content, with attribution as the only requirement in most cases.

While I agree with Lessig’s solution that authors should have more choice in how they copyright their material, I do believe he is biased against publishers and big business.  For example, he points to drug companies not reducing the cost of life-saving AIDS drugs for customers in Africa who cannot otherwise afford them.  Lessig says they do this because they want to protect their profits and do not want to be called before Congress and asked why the same drugs cost less in Africa than in the USA (Lessig 257-265).  The only solution he provides is to let African countries buy the drugs at a cheaper price, which goes against the patent model used in the United States now.  He does not mention having the government subsidize poor African countries, or poor people in the US, to help them buy the drugs at the market price.  Again, I think Lessig’s overall point, that there should be different copyright rules for different situations, is still the best, but I think it is disingenuous not to point out the solutions of groups he may disagree with.

I also think Creative Commons is a great step forward, but it is not perfect.  If Creative Commons, as a nonprofit, develops a “Some Rights Reserved” copyright, what stops another company from developing its own criteria?  Even if the organizations work under the same standards, readers may be confused because there are so many different symbols that each come from different organizations.  This may relate to the chaos that ensued after Andrew Jackson broke up the Second Bank of the United States (BUS).  Instead of one consistent currency, the end of the Second BUS spurred states and private companies to issue their own currencies.  This created a lack of confidence in what currency was real and what was counterfeit (see A Nation of Counterfeiters: Capitalists, Con Men, and the Making of the United States by Stephen Mihm).  The government also archives all of the copyrights that authors register with it, which is a valuable source for historical research.  If something is licensed only through Creative Commons, this does not occur.

The main point of Peter Suber’s “Open Access” is that academics really have no motivation to restrict their work by putting a price on it.  For over 350 years they have received university salaries and benefited when their work is distributed and cited by others.  However, this has been overtaken by publishers’ motivation for profit, which leads them to create paywalls for access to scholarly journals.  It is important to realize that even Suber admits this applies only to scholarly work, which is probably a minority of the total creative culture being produced.  Musicians and artists depend more on copyright because they make their living off the royalties from their work.  I would just ask: if scholars have absolutely no interest in distributing their work through publishers, then why do some still do it?  I think the discussion needs to be broadened to the legitimacy and authority of open access material.  As Kathleen Fitzpatrick makes clear in her book Planned Obsolescence (NYU Press, 2011), authors still rely on print journals because they give more authority and legitimacy to the author’s work than publishing the same work in an online open access journal.

Karl Fogel’s “Producing Open Source Software: How to Run a Successful Free Software Project” gives a great introduction to the history of open source software by discussing the creation of the GNU/Linux system by Richard Stallman and Linus Torvalds.  Stallman created the GNU General Public License, which says the software can be freely distributed and that any derivative works must be released under the same free terms.  Fogel also gives a very detailed how-to guide on how people can create their own open source software projects.  The book is itself a good real-world example of how open source software can coexist with proprietary software.

The best part of Creative Commons is that it empowers individuals to make their own choices about how to balance personal property rights with the free flow of information that promotes the common good.  They are not just defaulting to the standard position, which inherently endorses the all-rights-reserved model, and they are actively opposing the pressure from big business to keep extreme copyright protection, which Lessig says goes against common sense.

Filed under US History

Digital Preservation

How do archivists and historians ensure that digital archival records are not lost to history?  As this article shows, losing records can have huge effects on people’s lives.  On the other hand, one cannot get too pessimistic about the challenges of preserving digital records.  I, for one, could not think of any high-profile case involving a major loss of digital records.  I just may not be informed, but a Google search of “lost digital records” also did not turn up many cases.  The cases that I could find, like this one, seemed to be the result of a bad company rather than anything inherently wrong with digital records.

I do think that Kirschenbaum makes a good point when he says that digital records have both a physical and a symbolic property.  The physical property is stored inside the computer, and it takes a computer’s hardware and software to transform it into a symbolic image or text that actually has meaning for historians.  This is a big change from written records, where the physical and symbolic properties were the same.

Like so many other readings, this week’s readings stressed that the preservation concerns for digital records are very similar to those for physical records, including knowing the provenance of the record and determining its authenticity and accuracy.

Kirschenbaum also raised other interesting questions about how digital records should be preserved.  Should you keep them in the same format even if there is a risk that the old hardware needed to run the digital material will not be available in the future?  Should you preserve a file on a computer or the entire computer?  These are important concerns that also apply to physical records.  For example, should you keep a tattered letter from the 16th century in its original form even though it could deteriorate quickly, or make a digital copy of that letter? Should you preserve just this letter, or also the quill it was written with, the room it was created in, and records of the town where the author was born? All of these records would place the letter in greater historical context.  Archivists have to make decisions on what to save and what not to save, whether they are dealing with physical or digital records.

The Brennan and Kelly article showed that the preservation of records for history is not the top concern of the people who lived through a historical event.  The article did a good job of showing the work historians need to do to go out, search, and ask for records when people do not donate them on their own.

This topic should make all historians think of how they can help preserve important records from the time they are created and realize that digital records have preservation challenges just like physical records. 


Guidelines for Evaluating Digital History Scholarship

Reviewers should be named so they can be held responsible, and the process should be open to anybody.  Everybody has their own perspective, so it is better to get as many perspectives as you can.  Reviewers’ comments should not be moderated, because the author does not need to listen to everybody’s comments.  Also, if you did moderate content, how would you pick the moderators?  In many respects the guidelines for reviewing digital material are the same as for printed material.  Please see below for some questions reviewers should ask.

  • How effectively does the work utilize its source material?
  • What is its contribution to scholarship?
  • Who is the audience for this work?
  • What are its strong and weak points?
  • Does the author effectively prove their argument?
  • What is the goal of the project?
  • Is this a worthy goal?
  • How do they use technology to meet this goal?
  • How well did the content meet the goal?

Reflection

It was interesting to see that even though open access review is a very different system, it still has the same goals and asks the same questions as a regular review. It is also interesting that digital history can have multiple project types using many different technologies. This should lead reviewers to think of the goal, and to ponder whether it is a worthy goal or not, instead of just rejecting something because it is in a new medium. I also wondered if these varying mediums create more specialized fields.  In NEH grants you have to write toward a general, nonprofessional audience.  However, isn’t it better to have a person who knows the technology doing the review, because part of the evaluation needs to be on how well the person used the technology to meet their goal?  On the other hand, if you have people reviewing work only in their own specialized areas, does this create too many small groups and lose the bigger picture?


Undead Publishing

I agree with Kathleen Fitzpatrick’s claim, in Planned Obsolescence, that academic publishing is in trouble.  She makes a convincing argument that the bad economy is forcing publishers to push for electronic works.  However, the academy still values the printed monograph for completing one’s dissertation or gaining tenure.  What does this mean for academic publishing now and in the future?

There are both positive and negative effects of the gradual shift toward digital works.  It is interesting that when Fitzpatrick posted her draft online, she received many more reviewers than in a regular closed peer review process.  However, she found that they usually did not read the whole book, and the majority did not comment.  On the other hand, the smaller number of peer reviewers in the regular process read the whole book and could give feedback about the work as a whole.  She also shows that open review may be better because the reviewer has to take the credit or the blame for their critique; in a closed review process it is much easier for a reviewer to commit academic fraud because they are anonymous.

Writing History in the Digital Age, edited by Jack Dougherty and Kristen Nawrotzki, shows another advantage of digital history: it expands the audience to previously underserved minority groups.  This may improve the scholarship being produced if these groups are included.  This born-digital book also shows how digital content can tie the sources and the work more closely together.

Writing History in the Digital Age makes a larger point: digital history changes historical scholarship.  In the past, both the content and the medium portrayed printed books as final, complete works.  The books had a definite, all-encompassing thesis, and they were printed on paper, which could not be changed. Digital history starts to move away from a definite argument toward a more fluid, ever-changing text. As Fitzpatrick says in her section “The Death of the Author,” even though the idea is exaggerated, the author still has to give up some authority to the reader.  This form of scholarship also shows the underbelly of history: it shows that scholarship is always imperfect, based on the limited sources one can find, and is never complete.

If publishing is moving toward eBooks, does the way people read them affect how authors write them? Should historians create more popular, shorter historical works for an Internet audience?  If e-publishing allows almost anybody to publish their work, what factors should be used for peer review, for selecting journal articles, and for awarding prizes?

A good way of thinking about all these issues is asking: what is the goal?  No matter what form publishing takes, I think conveying unique, well-researched, factual ideas to others is always going to be the main focus.  One should not be overawed by new technology but should think about what technology, old or new, best serves that goal.  Just because we have a tool to publish an almost infinite amount does not mean we should.  As Fitzpatrick showed, even online journals with open access strive to give authority to their reviewers by making reviewers’ names public and, in some cases, using the old blind peer review to weed out at least some of the lower-quality article proposals.


Practicum Reflection: Neatline and Humanities Mapping

I have to say that creating a map with Neatline was extremely frustrating, but it did show me the value of spatial history.  I made a map of the prominent industrialists and financiers of the Gilded Age.  You can see my map by going to the following link: Birth Locations of Major Gilded Age Figures

There were several difficulties I had using this tool.  I first tried to create a map in Google Chrome, and the map did not show up.  After a while of playing with the settings, I tried Apple’s Safari.  In this browser the map showed up, but I could not save any of my new items.  After looking at the About Us section and checking for tips on the #ClioF12 Twitter feed, I finally ended up using Firefox, where I was at last successful in creating a map of the birthplaces of the major industrialists of the Gilded Age.  It was pretty fascinating, because all of the people were born in the North except for Andrew Carnegie, who was born in Scotland.  Even more interesting, most of the people were born in the state of New York.  This definitely shows that the North was more industrialized and had more commercial power at this time compared to the South.

This experience led me to see the importance of usability in digital humanities tools.  These websites are probably never going to become hugely popular like Google, so they are not going to be well known.  In addition, most of the people using them are probably going to be people in the humanities who are not computer experts.  This is why these sites should have clear instructions, be simple to use, and have visual orientations.  I believe other people looked to see if there were YouTube instructions and could not find them.  It makes sense that a visual tool should have visual instructions.

I also used Google Earth to map important tourist sites in New York in preparation for a trip that my wife and I took.  This was much easier and more intuitive to use.  It gave a good picture of the places we wanted to go and the best route to take to see all the sites in the least amount of time.
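A birthplace map like the one described above can also be generated programmatically rather than clicked together by hand.  As a minimal sketch, assuming Python and using only the standard library, the script below emits a KML file that Google Earth can open.  The figures and coordinates are illustrative examples chosen for this sketch, not data from my actual Neatline map.

```python
import xml.etree.ElementTree as ET

# Illustrative data only: a few Gilded Age figures with approximate
# birthplace coordinates as (longitude, latitude) pairs.
FIGURES = {
    "John D. Rockefeller": (-76.15, 42.10),   # Richford, New York
    "J. P. Morgan": (-72.67, 41.77),          # Hartford, Connecticut
    "Andrew Carnegie": (-3.42, 56.07),        # Dunfermline, Scotland
}

def build_kml(placemarks):
    """Return a KML document string with one Placemark per figure."""
    kml = ET.Element("kml", xmlns="http://www.opengis.net/kml/2.2")
    doc = ET.SubElement(kml, "Document")
    ET.SubElement(doc, "name").text = "Birth Locations of Major Gilded Age Figures"
    for name, (lon, lat) in placemarks.items():
        pm = ET.SubElement(doc, "Placemark")
        ET.SubElement(pm, "name").text = name
        point = ET.SubElement(pm, "Point")
        # KML coordinates are written "longitude,latitude"
        ET.SubElement(point, "coordinates").text = f"{lon},{lat}"
    return ET.tostring(kml, encoding="unicode")

if __name__ == "__main__":
    with open("gilded_age_birthplaces.kml", "w") as f:
        f.write(build_kml(FIGURES))
```

Opening the resulting file in Google Earth plots each placemark, which would make the Northern and New York clustering I noticed easy to see at a glance.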


Reading Reflection: Spatial History, Another Historical Context

The assigned readings gave a good argument for why the “Spatial Turn” in history is important in addition to the “Linguistic Turn” of recent historical scholarship.  The most important point I got out of these readings was that spatial history is more than just a tool, and the historians who employ spatial history are more than just “technicians.”  Spatial history is another way of looking at the past that enables historians to ask new questions and come up with new answers.  This field is similar to other areas of digital history in that it is collaborative, open source, and more effective with large amounts of data.  Richard White did an excellent job of giving a clear explanation of spatial, representational, and absolute space.  Spatial history is thinking about moving through different forms of space, like going from your bedroom, to the bathroom, to the kitchen.  Representational space is anything, like maps or timetables, that tries to recreate space.  Finally, absolute space is physical space that can be recreated on a map, as opposed to other representations of space, such as Bill Cronon’s “map” of the time it took to travel to American cities over the course of several years.  Other readings show that you can overlay other data, like population, wealth, and sanitation, on the same map to see if one can find any connections.

 

The “Place and the Intellectual Politics of the Past” reading showed how historians and geographers approach maps differently.  Geographers are interested in location for its own sake, while historians are interested in maps and other visualizations for what they can teach people about the humanities.  It was interesting to see the strengths and weaknesses of spatial history.  Spatial history focuses less on the individual and more on overall trends; however, it is less equipped for telling a narrative.  Another important point that was raised is that spatial history is shaped by how you define your location.  For example, people were concerned about different things when looking at cities as opposed to nations as a whole.

 

After learning more about spatial history, I could definitely see applications for it in my current studies.  I am reading a book about Irish immigration after the Potato Famine and its effect on Liverpool and Philadelphia.  The author talks about many broad trends, like poverty, sanitation, and overcrowding, rather than personal stories.  There are no graphs, charts, or any other visualization supplements. I think the book would be vastly improved by visualizations, so that the reader could see the things the author was writing about.


Practicum Reflection Week 6

This week, as an introduction to text mining, I experimented with Voyant and Google’s Ngram Viewer.  This confirmed my belief that close reading is still very important, even if distant reading also has value.  I used Voyant with some books that I was familiar with, like Frederick Douglass’s autobiography and The Wizard of Oz, and some books that I was not familiar with, like Ulysses by James Joyce.  This showed me that I could get a general idea of what a book was about from the top words shown.  For example, the top words for Frederick Douglass’s autobiography were “slave” and “master,” which give a pretty good general idea of what the book is about.  However, The Wizard of Oz showed the value of close reading, since the top words were “scarecrow,” “Dorothy,” and “woodsman.”  If one had not read the book, would one have any idea what the plot was, let alone the allegorical meaning it has in terms of the gold and silver debates of the late 19th century?

The other tool I used was Google’s Ngram Viewer. I liked this better because usage is shown over time, so the word counts reflect historical events rather than just the contents of one book.  I am more familiar with historical events, so I could get more out of this tool.  I tested some words with predictable results.  For example, the word “automobile” started to gain popularity at the beginning of the 20th century, peaked in the 1950s, and then tapered off from there.  However, Google’s Ngram Viewer also brought up some questions I did not have the answers for, as Franco Moretti would say.  For example, the word “Irish” was not used very much except for the periods 1800-1820 and 1860-1870, when there were huge spikes in its usage. This was interesting, because the Irish potato famine occurred in 1845-1852, a time when there was almost no mention of the word “Irish.”  One would think that this tragedy and the resulting mass immigration to America and elsewhere would cause more attention to be paid to the Irish, but there is no spike until 1860.  Why is this?  Is it because most Irish did not immigrate until later? Were books not published about the famine until about a decade later? Or is it the result of something else? This definitely shows the value of text mining for its ability to raise new historical questions.
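The top-words view that Voyant provides can be approximated in a few lines of Python.  This is only a rough sketch of the general technique, simple tokenization plus a stopword list, and not Voyant’s actual algorithm; the tiny stopword set and the sample sentence are my own illustrations.

```python
import re
from collections import Counter

# A tiny stopword list for illustration; real tools use much larger ones.
STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "was", "i", "my", "his"}

def top_words(text, n=5):
    """Return the n most common non-stopword tokens in the text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS)
    return counts.most_common(n)

sample = ("The slave was taken to his master. The master sold the slave, "
          "and the slave never saw his master again.")
print(top_words(sample, 2))  # -> [('slave', 3), ('master', 3)]
```

Running the same function over the full text of Douglass’s autobiography would surface “slave” and “master” much as Voyant does, and an Ngram-style view is essentially the same count repeated per publication year across a dated corpus.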


Text Mining and Historical Scholarship

In Graphs, Maps, Trees, Franco Moretti gives an exceptional introduction to text mining, raising both the possibilities and the shortcomings of this new tool, which are noted below.

Advantages

Text Mining gives us the ability to…

  • Analyze large amounts of data.
  • Search through more niche markets.
  • Recognize new patterns.
  • Ask questions we may not be able to answer.
  • Break down texts further and make them more searchable.
  • Track how users view content to see what is important.

Disadvantages

  • Less focus on individual texts.
  • Can only give you representations of data, not interpretations of data.
  • Tries to fit everything within one framework. What if there is no framework? What if life is random? What if the framework changes?
  • Less focus on politics, because this is a fleeting concern of the present.
  • Possibly takes away some of the unique historical context of each work.
  • More books were produced than the ones counted to make the graphs.
  • Publication numbers do not always reflect who actually read the books; for instance, more than one person could read the same book.
  • Hard to explain novelty and uniqueness.
  • Takes away from human agency.

Moretti makes an interesting point that the ability to analyze large amounts of data affects what type of scholarship gets produced.  These tools allow one to focus on the overall historical context and not just the few major events usually studied in history.  He shows how these major events are usually connected to much larger patterns.  Historians need to analyze the strengths and weaknesses of this source just like any other source they use.

Graphs, Maps, Trees: Abstract Models for Literary History by Franco Moretti

Sometimes, I think, the graphs give a false sense of objectivity to his data.  Moretti’s literary genre graph looks very objective, showing the number of books in three different genres over time.   However, is there still subjectivity here? For example, the classification of a book can be subjective if it sits on the edge of two genres.   Moretti starts out by saying he is a Marxist, and text mining enables him to generate material that supports his ideological bias.  Burke had a good point when he said text mining does not do a good job of showing uniqueness and human agency.  I believe humans have free will, even if limited by their circumstances, and this is an important part of history.  Thus, text mining is a valuable tool, but historians should use many different tools and sources to gain the best picture of reality they can.  It is interesting that the fundamentals of the historical profession, like having a variety of sources and analyzing the strengths and weaknesses of sources, are still important even when discussing relatively recent advances in digital history.


Practicum Reflection Week 5

Editing Wikipedia and Transcribing Papers of the War Department

Editing Wikipedia and transcribing the Papers of the War Department gave me more appreciation for both of these sites.  The act of changing an entry on Wikipedia really shows that it is a crowdsourced effort and anybody can edit its content.  Editing Wikipedia was hard because, as Rosenzweig said, the writing is poor.  It is also not your own writing, so it is hard to fit your prose in with all the text that came from other people.  It is also tough to tell where to include information, because sometimes the article is not organized in the most coherent manner.  It was also challenging to write because Wikipedia talks about the past differently than historians usually do: Wikipedia writers try to write just the facts, whereas historians usually use facts to support an overall thesis.

The Preservation Society of Newport County: Newport Mansions

http://www.newportmansions.org/learn/the-gilded-age-revisited

Strengths

  • Very engaging homepage, with a huge picture of a Newport mansion and the main headings at the top of the page.  The visitor is not overwhelmed with information but still receives a general picture of what is on the site.
  • Excellent use of several different colors that are not distracting when one is trying to read the text.
  • The information architecture (IA) is very well organized.  There are headings at the top of the page, and when one clicks on them, a bar at the left-hand side gives the subheadings within that broad category.  This gives the visitor a good sense of what part of the website they are in.
  • There is also a search bar for quick searching.
  • A printer friendly button is included, which helps printing and is probably also helpful for people with disabilities.

Weaknesses

  • The bar at the top of the page is helpful but there are 9 buttons on this bar.  They may want to cut down on the number of items to make the site more focused.
  • The site is not very interactive.  There are very few links or pictures and no maps.  In addition, the few pictures that are on the site are static, so the visitor cannot zoom in on them.
  • The site is run by a preservation organization so there are many areas geared to selling items or tours.  This distracts from the content of the site at times.


Crowdsourcing: Facts vs. Perspectives, Week 5 Reading Reflection

The Pros and Cons of Crowdsourcing

All the authors in this week’s readings gave a well-informed summary of the benefits and pitfalls of crowdsourcing. Their analysis of whether to use crowdsourcing in historical organizations raises important issues relating to the theory and practice of the historical field.  Rosenzweig gives a great analysis of the strengths and weaknesses of Wikipedia.  I believe the argument comes down to getting the facts as accurate as possible while also using crowdsourcing to open up the historical profession to as many different perspectives as possible.  Historians should argue vehemently against objectively false information on Wikipedia, like incorrect dates or sequences of events, and should have enough confidence in their profession to correct it.  They should also realize the power and influence of Wikipedia and not ignore it, because other people will use Wikipedia whether historians have improved it or not.

[Image: Sign for the Wiki Wiki bus at Honolulu International Airport. The term was the inspiration for the name Wikipedia.]

Crowdsourcing is not perfect, but it should still be utilized by historians because it has great potential to open up the historical field to many new perspectives, as well as to accomplish objectives more quickly, easily, and cheaply than was ever possible before.  This is probably the most important part of crowdsourcing.  As the L. Sprichiger and J. Jacobson article shows, history is complex and does not have just one storyline but many storylines and perspectives.  Historians should embrace crowdsourcing because it makes it easier for many different perspectives to be heard.  Crowdsourcing has its positives and negatives, but so does any other source historians use in their research.  Historians should treat crowdsourcing as they would any other source: by looking at the creators, the audience, the possible bias, and the historical context within which it was created.

Radical Trust

The History News article shows why historians should have radical trust in the general public to conduct crowdsourcing, as long as the source is clearly identified and there are some basic ground rules that everybody must follow.  As long as the source is known, people can evaluate the authority and importance they want to give to the crowdsourcing project.  Ground rules may be contested, but they are important so the organization does not enable people to post hateful, offensive, or obscene items.

How Open Should Historians Be?

While I agreed with the goals of the “Telling an Old Story in a New Way: Raid on Deerfield: The Many Stories of 1704” website project, I also think some of its accomplishments were overstated.  It was admirable that the creators shifted from a Eurocentric perspective to showing the same event from each of the five different groups involved in the massacre.  However, the website creators still had to decide that the Native Americans’ historical context and perspective were legitimate and worth telling.  Even telling the story this way inherently left out other perspectives, like a focus on women’s history, the technology of the time, or the role religion played in this event.  I also wondered how far this inclusion of other perspectives should go.  Should historians take other groups’ perspectives into consideration no matter how insane, hateful, and violent they may be?
