this now works:
http://live.dbpedia.org/page/Landscape-Portrait
Interesting! It does not reference any media, so I need to look into that. I wonder if the ontology used has no attribute called media? Maybe this is something that can be updated via the DBpedia community? Need to check if this is the same for the archived/downloaded version. It seems its mappings are not great - no media, and lots of artists' names as fields; a bit confusing.
Wednesday, 23 November 2011
Semantic connection
Is there a free site that supports SPARQL queries? I'll ask. Just thinking through the process, it is the semantic search engine where the 'connection' between different definitions/attributes of the same entity (N19 4EH) will be made. So a piece of code will be the linchpin that connects the two semantically -
habeas data
Came across this post from Andreas Maria on the UnlikeUs list, titled '(Almost) everything Facebook knows about me (IR3ABF)' (see below). It strikes me that this relates to a static me, not the one roaming around and clicking and liking etc, and although this data is valuable, its value is increased exponentially when leveraged with the network activities of this 'me' - that is where the real money is.
{
"id": "732517354",
"name": "Agam Andreas",
"first_name": "Agam",
"last_name": "Andreas",
"link": "http://www.facebook.com/ andreas.maria",
"username": "andreas.maria",
"bio": "http://www.nictoglobe.com\r\
nhttp://burgerwaanzin.nl",
"quotes": "\"Facebook is built by drugs using spoiled middle class hipsters\"",
"work": [
{
"employer": {
"id": "111683558933850",
"name": "CDust Creative Engineering"
},
"location": {
"id": "111777152182368",
"name": "Amsterdam, Netherlands"
},
"position": {
"id": "144223702264260",
"name": "Executive Director"
},
"description": "Perceptual Research",
"start_date": "1991-01",
"projects": [
{
"id": "190399884378846",
"name": "Burgerwaanzin",
"description": "Digital Cinema & Radioshow on Amsterdam based Free Radio Patapoe",
"start_date": "2009-01"
},
{
"id": "185692004852015",
"name": "Friction Research",
"description": "Online series on the theory and practice of New Media Art ",
"start_date": "2007-01"
},
{
"id": "241244979270027",
"name": "Opera, Arbeiten, Works",
"description": "Artworks by A. Andreas",
"start_date": "1989-01"
},
{
"id": "204950576246315",
"name": "Nictoglobe",
"description": "Online Magazine for Transmedial Arts & Acts",
"start_date": "1986-01"
}
]
}
],
"timezone": 1,
"locale": "en_GB",
"verified": true,
"updated_time": "2011-11-20T01:08:13+0000",
"type": "user"
}
JSON result using Facebook's Graph API
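The dump above is just JSON, so any JSON parser can pick it apart. A minimal Python sketch, using a trimmed copy of the record above (field names are as they appear in the dump; nothing here calls the live Graph API):

```python
import json

# A trimmed copy of the Graph API record shown above.
raw = '''
{
  "id": "732517354",
  "name": "Agam Andreas",
  "work": [
    {
      "employer": {"id": "111683558933850", "name": "CDust Creative Engineering"},
      "projects": [
        {"name": "Burgerwaanzin", "start_date": "2009-01"},
        {"name": "Nictoglobe", "start_date": "1986-01"}
      ]
    }
  ]
}
'''

profile = json.loads(raw)

# Pull out every project name across all work entries.
projects = [p["name"]
            for job in profile.get("work", [])
            for p in job.get("projects", [])]

print(projects)  # -> ['Burgerwaanzin', 'Nictoglobe']
```

Even this flat traversal shows how much structure the profile carries once it leaves the Facebook interface.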
Tuesday, 22 November 2011
Audit
It seems like a good idea to take stock of my recent foray into publishing Landscape-Portrait data into the public realm. My first plan was to create several pages in Wikipedia: a page about Landscape-Portrait, a page about the discipline of Digital Public Art and Digital Community Art, and one about my own postcode N19 4EH. My plan was to link all these together, and hopefully the next time DBpedia (DBpedia is the Semantic Web mirror of Wikipedia) does an import of semantic data from Wikipedia, turning all the data into RDF LOD, these pages would be included. Unfortunately several pages were rejected as non-notable concepts - the only ones that survived were the LP and Digital Public Art pages. That said, I did add some of the videos about N19 4EH to the Upper Holloway page, so it will be interesting to see how this works (there is a live SPARQL query @ http://live.dbpedia.org/page/Digital_Public_arts but it seems to be down at the moment).
My next thought was to upload all the video content and metadata to Archive.org. Jeff from Archive has been very helpful and this seems a good solution for archiving the work, but it is not available as semantic data. It does, however, offer a fixed URI for the content, so this could be utilised as part of an RDF schema based around a postcode; it also allows batch uploads, which Wikipedia does not seem to offer to new users. With this in mind I have been looking at Freespace, a Google-funded initiative which operates in a similar manner to Wikipedia. I created a page for my postcode N19 4EH and referenced my own video portrait from LP using the fixed URI; however, like Wikipedia, the page was deleted. The plan now is to work out how to associate this resource with other URI resources which talk about the same entity - i.e. the N19 4EH postcode. I've posted various questions to OKN forums... waiting for a reply.
I've also suggested an idea to have half-hour online surgeries where practitioners, such as myself, can talk to an experienced practitioner about the ambition for a particular project and receive some guidance.
Monday, 21 November 2011
URI similarity log
Trying to find out if there is an attribute of a URI which records some type of 'SameAsValue' - so different resource stores that relate to the same entity are in some way linked. If this is the case, then it would be possible to 'connect' different attributes, such as the geographical makeup of a postcode with the experience - in video say - of living there.
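RDF does in fact have something close to this: the OWL vocabulary defines an owl:sameAs property that asserts two URIs denote the same real-world entity. The merging behaviour it enables can be sketched in plain Python - the URIs, stores, and attribute names below are invented for illustration, not real datasets:

```python
# Two hypothetical stores describing the same postcode under different URIs.
geo_store = {"http://example.org/geo/N194EH": {"lat": 51.56, "lon": -0.13}}
video_store = {"http://archive.example/N194EH": {"video": "portrait_01.ogv"}}

# An owl:sameAs-style link declaring the two URIs equivalent.
same_as = [("http://example.org/geo/N194EH", "http://archive.example/N194EH")]

def merged_view(uri):
    """Collect attributes from every store, following sameAs links."""
    uris = {uri}
    for a, b in same_as:
        if a in uris or b in uris:
            uris.update({a, b})
    view = {}
    for store in (geo_store, video_store):
        for u in uris:
            view.update(store.get(u, {}))
    return view

print(merged_view("http://example.org/geo/N194EH"))
```

This is exactly the 'connect geography with lived experience' move: one query over either URI surfaces both the geographical makeup and the video.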
Video to text transcription
All the video uploaded to Archive.org will be accompanied by a spreadsheet of metadata. I was thinking of adding a field containing the spoken text within each video. Unfortunately most software-based automated audio-to-transcript convertors are not of good enough quality, which means they are used in conjunction with hand-made transcripts, as with the commercial service SpeakerText.
Other non-commercial academic software convertors come with a caveat concerning their accuracy, such as Transana.
Archive vs FreeSpace
By using Archive.org a permanent URL can be obtained for the video interviews from LP; these can then be referenced as attributes in RDF as part of a fixed resource for each entity (postcode) referenced by LP. The question is where to locate this resource: Archive.org or some derivative of Google's, such as FreeSpace.
Friday, 18 November 2011
Struggle
Really straining with the structuring of the final phase of LP. If I put the video content on Archive.org that's good - the metadata could be stored as a dataset on the OKN CKAN/data hub site or http://thedatahub.org/dataset/freebase. Freebase (owned/housed by Google) would allow me to publish all the content and the metadata in one place, and it can also be queried in a language like SPARQL. Mmm, choices.
Thursday, 17 November 2011
Harwood
Whilst working away at this technical issue of open data, data in the public realm, access and accessibility, there is something at the back of my mind which I am uncomfortable with. It's sort of generated by the embrace of 'openness' within Open Data at a governmental level. How it is portrayed as an inherently good thing to push all this data into the public realm, which seems like an act of disavowal, kinda 'here, take it, so I don't have to be responsible for it any longer'. This unease is amplified in an essay by the artist Harwood, where some of this instrumental logic is exposed.
There is then a sense that this 'Openness' masks some other forms of systematic manipulation. In reading about Matta-Clark's engagement with durational works, such as Window Blow-Out, there seems some connection between the robust, irrefutable logic of 'Open Data' and the need to make works which are anything but. In this way perhaps Government data produced under this notion of transparency can be viewed as operating the ventricles of an enlightened power, interconnecting the domains of government and population. The relative openness of the data can be seen as an attempt to unfold 'rationalist' attempts to evidence decisions. This transparency debate creates a protocol between government and non-government Database Management System administrators and ethical statistical analysts who summon the latent energies contained in the new knowledge to power their differing political factions. This is a data exchange between those who can already perceive data from its modes of representation - or, to put it another way, understand the construction of the data - and wish to exploit it as a form of self-reflexive critique of government.
Archive.org wins
Finally getting some idea of how to locate the content from Landscape-Portrait
in the public realm, and hopefully rendering it accessible via the use of RDF
Linked data markup.
Just had an email from Jeff at Archive.org. There is a way to batch upload
data and metadata to the archive; what is better still is that there is a
static URI, so this content can be included within a structured publication
of Open Linked Data.
With this in mind I have stopped uploading content to Wikipedia, as I no
longer think it is the right place. My plan had been to upload new pages
for each of the postcodes featured in Landscape-Portrait but this has proved
an unsuccessful approach and extremely time consuming. Far better to
create my own linked data page for each postcode and feature the video
content there.
Monday, 14 November 2011
Wikipedia vs Archive.org
Finally loaded all my video content into Wikipedia, although how long it stays there is anyone's guess given my frustrating experience of having content deleted. What is apparent is that Wikipedia is not the place to upload this content en masse - which is why I have uploaded the same interviews to Archive.org, to think about using this as a store for the video, and then perhaps write a series of pages which locate and contextualise the video - and offer a fixed URI for it - all questions that I have posted to their forum.
Thursday, 10 November 2011
Entry deleted
I've submitted four articles to Wikipedia now - three have been either deleted or marked for deletion. Not sure I agree with Jaron Lanier when he calls Wikipedia 'faux authoritative'. Seems very authoritarian, even didactic, to me.
Wednesday, 9 November 2011
Postcode as data object
Working through the tutorials for DBpedia, it's interesting that DBpedia pages/resources for specific postcodes do not exist yet, for example N19 4EH. What needs to be done, then, is to link a new postcode page to a number of pages within Wikipedia - for example the page about Upper Holloway, the page about postcodes, and north London postcode pages. I would then add the video portraits from Landscape-Portrait that relate specifically to a postcode, say N19 4EH, and these would then be incorporated into the next DBpedia dump.
In the process of researching this I found some nice toys/tools:
http://www.visualdataweb.org/relfinder/relfinder.php
Browsers: http://dbpedia.org/snorql/?describe=http%3A//dbpedia.org/resource/Alexander_Marcus
Query Builder: http://factforge.net/sparql
Tuesday, 8 November 2011
RDF linked Data
Just working my way through some RDF DBpedia tutorials, and there is the start of some form of structure whereby the video data from Landscape-Portrait might be published into the public realm. For example, if postcodes are objects within DBpedia, then the video content of LP could be assigned to that name. Then when other users make use of that postcode name, they will be able to access the video content from LP along with descriptive elements such as what was the question being answered etc. Need to draw this out to make sense of it, and maybe run a query of the postcode and see what information is already assigned to it.
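Drawing it out: RDF boils down to subject-predicate-object triples, and the postcode idea can be sketched as a toy in-memory triple store. The property names, prefixes, and video URL below are invented placeholders, not real DBpedia terms:

```python
# A toy triple store: (subject, predicate, object).
triples = [
    ("ex:N194EH", "rdf:type", "ex:Postcode"),
    ("ex:N194EH", "ex:partOf", "dbpedia:Upper_Holloway"),
    ("ex:N194EH", "ex:hasVideoPortrait", "http://archive.example/lp_n194eh.ogv"),
    ("ex:N194EH", "ex:interviewQuestion", "What does home mean to you?"),
]

def query(subject=None, predicate=None):
    """Return objects matching an optional subject/predicate pattern -
    roughly what a SPARQL basic graph pattern does."""
    return [o for s, p, o in triples
            if (subject is None or s == subject)
            and (predicate is None or p == predicate)]

# Anyone who knows the postcode name can now pull the video and its context.
print(query(subject="ex:N194EH", predicate="ex:hasVideoPortrait"))
```

A real deployment would swap this list for a published RDF graph and the function for a SPARQL endpoint, but the shape of the lookup is the same.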
Monday, 7 November 2011
Wikipedia Video Upload
Just finished uploading video from Landscape-Portrait to Wikipedia. It's a very time-consuming process and prone to errors which force you to restart from scratch. It's taken me 2 hours to upload 18 video clips, including markup etc. Kept getting errors for long file names, duplicate file names, etc. There isn't (for newbie users like me) any batch upload facility, which is annoying; anyway, it's done. The content I uploaded was my own interview from the site, so I made it available within the public domain, where it does not have to be attributed and can be used in whatever manner. For other participants I'll probably use a share-alike attribution licence.
The next part of the project is to access this media through DBpedia using RDF Linked Data, now that might take a time.
Landscape-Portrait - Bournemouth - Final phase
Trying to get the final phase of the Landscape-Portrait project off the ground; just written this overview of the next phase:
Landscape-Portrait. Final Phase.
The final outcome of the Bournemouth iteration of the Landscape-Portrait project will consist of the publishing, dissemination and promotion of audience generated content to the digital public realm.
The publishing of project content (video, text, data) will conform to guidelines outlined by the W3C[1] and the Open Data movement, where data and material is conceived of as ‘free to use, reuse, and redistribute’.
Specifically I will publish users' video content using URIs [2] (Uniform Resource Identifiers) that locate the content in a fixed, universally accessible manner. This procedure will be complemented by the publishing of related metadata, which describes content using Open Data and W3C recommended schema - such as RDF and linked data[3] - consistent with the development of the semantic web.
Locating and describing content using a formally approved schema makes it possible to offer content to other agencies, practitioners, projects and audiences in a coherent and dependable way. This approach to data and material dissemination has been adopted at a governmental[4], public and private level. In making use of these practices within a public arts project, pertinent questions about arts engagement with use and legacy values are developed, further extending the conception of ‘durational’ public art practices.
Once elements of the Bournemouth project have been published and made available within the public realm there will be a requirement to promote this content. There are a variety of Open Data tools and services available for this purpose. It is an ambition of this phase of the project to encourage use of this content by governmental (for example local councils), public (charities, NGOs) and personal (community activists, artists and residents) agencies and practitioners.
The final phase of the work will take approximately four days and will involve myself and other members of the original collaborative group in discussion about how to best achieve this phase of production.
The hoped for outcome of this phase will be the use of the video content produced during the Bournemouth installation by a range of entities, big and small, personal and public, cultural and civic.
[2] See: http://labs.apache.org/webarch/uri/rfc/rfc3986.html or http://en.wikipedia.org/wiki/Uniform_Resource_Identifier
Use URIs to identify things.
Use HTTP URIs so that these things can be referred to and looked up ("dereferenced") by people and user agents.
Provide useful information about the thing when its URI is dereferenced, using standard formats such as RDF/XML.
Include links to other, related URIs in the exposed data to improve discovery of other related information on the Web.
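Putting the four principles above into practice for a postcode could look like the sketch below: mint an HTTP URI, then serialise a minimal description in RDF/XML with the standard library's XML tools. The domain, the Archive URL, and the choice of Dublin Core properties are all placeholder assumptions, not a settled vocabulary:

```python
import xml.etree.ElementTree as ET

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("rdf", RDF)
ET.register_namespace("dc", DC)

# Principles 1 & 2: an HTTP URI identifying the postcode (placeholder domain).
uri = "http://example.org/postcode/N194EH"

# Principle 3: useful information in a standard format (RDF/XML).
root = ET.Element(f"{{{RDF}}}RDF")
desc = ET.SubElement(root, f"{{{RDF}}}Description", {f"{{{RDF}}}about": uri})
title = ET.SubElement(desc, f"{{{DC}}}title")
title.text = "Postcode N19 4EH"

# Principle 4: a link out to a related URI (the archived video portrait).
ET.SubElement(desc, f"{{{DC}}}relation",
              {f"{{{RDF}}}resource": "http://archive.example/lp_n194eh"})

xml = ET.tostring(root, encoding="unicode")
print(xml)
```

Serving this document when the URI is dereferenced would satisfy all four principles at once.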
Wednesday, 2 November 2011
Semantic, RDF and Linked data.
Just writing a description of the final phase of the Landscape-Portrait Bournemouth project. In reading about the history of RDF as a subset of XML it seems to me, materially and maybe structurally, that there is a connection between the schema of RDF and the materiality of video.
For example, the portrait videos in Landscape-Portrait do not work in a statistical fashion; rather, each video text is a temporal descriptor, not an abstract fixed piece of data. In order to access it, a coherent taxonomy, such as that outlined by RDF, needs to be employed.
In essence the video functions as a container, much in the same way as XML/RDF is a language for describing content in a uniform manner, video is used as a temporal container of descriptive information, which might be accessed by RDF protocols and made sense of at a machinic level by using a semantic approach.
That said, the video is also a signifier of a great deal of other information not quantifiable using a descriptive language such as RDF - such as might be understood by Jameson's description of video as 'a total flow' of imagery, words and context, 'overriding the hegemony of the linguistic medium' - and perhaps this quote from Jameson points towards this machinic understanding:
'Yet the involvement of the machine in all this allows us now perhaps to escape phenomenology and the rhetoric of consciousness and experience, and to confront the seemingly subjective temporality in a new and materialist way, a way which constitutes a new kind of materialism as well, one not of matter but of machinery.' (Jameson, Postmodernism, or, The Cultural Logic of Late Capitalism, 1991).