Teaching Your Computer to See

Google is at it again: not satisfied with self-driving cars, now they're trying to teach your iPhone to see. I found an article on Forbes.com called "Teaching the iPhone to Drive." Basically, the article says that camera technology is getting to where it can see better than the human eye; however, a computer cannot process that data without human interaction. It still needs us to tell it what it is seeing. According to the author, though, that could soon change, and we're already seeing steps in that direction.

The author calls it the "visual singularity": the point where computers can see better than we can, and according to him it's fast approaching. To an extent it's already here in specialized forms; one example he mentions is the LIDAR system in Google's autonomous cars. However, that system is extremely expensive and too large to be practical for anything beyond its current use. There are also license plate and facial recognition systems, but those are specialized: good at what they do and nothing else. What's holding the technology back is that seeing, and processing what you see, involves a huge amount of information, and right now computers can't handle it. This is where the folks at Google come in.

Basically, they figured that since it was the internet, with its mass of online text and digital audio, that taught computers how to read and understand speech (think Siri), they would network a bunch of processors (16,000 of them) and set them loose on YouTube. The machines ran for over a week and "looked" at millions of images. The results? The network taught itself to recognize cats. Why cats? Have you ever seen how many cat videos there are on YouTube? Out of context that doesn't sound all that impressive, but it's a huge improvement over anything else that has come along. So, to make a long article less long, how does this affect us? One thing the article mentions is medicine. According to the author, coupled with the advanced diagnostic programs that are already being developed, doctors may be out of a job. What other ways do you guys think this could be used? Should it be used, given that computers are known to have errors? How will it affect us if/when computers are literally able to do everything we can, and do it better?
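For a sense of what "learning from unlabeled data" means, here is a toy sketch, entirely my own illustration and not Google's actual system: a simple k-means clustering routine that discovers two groups in unlabeled points without ever being told the labels. It's the same basic idea, scaled down from millions of YouTube frames to a hundred made-up 2-D points.

```python
# Toy illustration of unsupervised learning (not Google's real system):
# k-means finds groupings in unlabeled data on its own.
import random

def kmeans(points, k=2, iters=20):
    centers = random.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center.
        groups = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k),
                      key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            groups[idx].append(p)
        # Move each center to the mean of its assigned points.
        for i, g in enumerate(groups):
            if g:
                centers[i] = tuple(sum(c) / len(g) for c in zip(*g))
    return centers

random.seed(0)
# Two well-separated clusters of synthetic 2-D "feature vectors".
cluster_a = [(random.gauss(0, 0.5), random.gauss(0, 0.5)) for _ in range(50)]
cluster_b = [(random.gauss(5, 0.5), random.gauss(5, 0.5)) for _ in range(50)]
centers = kmeans(cluster_a + cluster_b)
print(sorted(centers))  # the two centers should settle near (0, 0) and (5, 5)
```

With clusters this well separated, the two centers reliably settle near the true cluster means. Real systems learn far richer features than this, but the principle of finding structure without labels is the same.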

The Final Blow to Internet Privacy?

The internet isn’t well known for being private. Major companies like Google and Facebook have been tracking users all over the place for years, as well as mining data from all corners of the internet to sell to advertisers. But even Google and Facebook don’t know everything about your web activity. It’s still kind of private-ish. However, thanks to AT&T and a number

of other internet service providers, the last bits of privacy we currently enjoy are about to be dissolved.

Earlier this year, all of the major ISPs in the United States (AT&T, Verizon, Comcast, Cablevision, and Time Warner Cable) announced that they will be rolling out a "six strikes" plan to crack down on copyright infringement. Basically, the ISPs will penalize anyone they catch pirating content online, and the penalties will become more severe with each strike. But what is disconcerting about the initiative is how they plan on catching pirates. Reports from earlier this year have revealed that they will be utilizing deep packet inspection on a massive scale, which means that they will be keeping track of absolutely everything that you do online. (A real-life analogy to DPI would pretty much be someone following you around 24/7 and filming everything you do, everywhere you go.) It appears that they also have databases to store the information they collect; however, no information has been released on how much of the data they will store or for how long.
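To make "deep packet inspection" concrete, here is a minimal sketch of the idea (the payloads and signatures below are hypothetical, not any ISP's actual system): instead of looking only at a packet's headers (addresses and ports), DPI reads the packet's contents and matches them against known signatures.

```python
# Minimal sketch of the DPI idea (hypothetical signatures, not a real system):
# inspect the *contents* of each packet, not just its headers.

SIGNATURES = [b"BitTorrent protocol", b".torrent"]  # illustrative signatures

def inspect(payload: bytes) -> bool:
    """Return True if the payload matches any known signature."""
    return any(sig in payload for sig in SIGNATURES)

packets = [
    b"GET /index.html HTTP/1.1",             # ordinary web traffic
    b"\x13BitTorrent protocol...handshake",  # file-sharing handshake
]
flagged = [p for p in packets if inspect(p)]
print(len(flagged))  # 1
```

Real DPI gear does this at line rate across every flow, which is exactly why it amounts to watching everything a customer does online.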

They may catch a few pirates this way and deter a lot of future ones. On the other hand, most serious infringers (at least the ones with brains) will probably just start using VPNs and keep pirating away. If the people they’re trying to catch are just going to find a way around it, the result of the ISPs’ program will still be massive-scale surveillance, but only of their innocent, law-abiding customers who aren’t doing anything wrong. So what do you think? Will this initiative be effective? If so, is it worth sacrificing what’s left of our privacy for?


The Beauty of Google Documents

Hello gang, with Milestone III coming up I thought it appropriate to use my blog post as an opportunity to pay homage to Google Docs. Google Docs is a free JavaScript-based web application that allows you to create documents and upload them to the cloud for storage online. The service is free up to a storage cap of 10 GB, after which monthly payments are necessary. This in and of itself is uninteresting; there are many services that provide this kind of data storage, such as Dropbox.

What makes Google Docs unique as a Web 2.0 technology is its ability to let the document creator share his or her work and collaborate in real time with others. This means multiple people can work on the same document at once and have their additions appear as they write them. This feature is a boon for students, as anyone with even a basic computer can access the internet and write text. I have had many a study session with other students by uploading our professor's study guide and having everyone fill in the information they know best. It also works wonderfully for projects (such as our upcoming Milestone).

But how did Google Docs come to be? Its origins are in fact two different products, Google Spreadsheets and Writely. Google Spreadsheets was a simplified version of the current Google Docs, limited to creating data spreadsheets. However, Google's purchase of Upstartle, the startup that created Writely, in March of 2006 was the major jump toward the present product. Writely carried the feature that defines Google Docs today: collaborative text editing. Four years later, in March of 2010, Google purchased DocVerse, allowing full compatibility with Microsoft Office. Last year, offline viewing was made possible by a web app that automatically uploads your content once connected. The present-day Google Docs is a clean, multi-functional product that is fully compatible with almost all common file types.

What have your experiences been with this product, or similar collaborative text editors? What do you think could be done to further improve it? Personally, I think adding speech to text would make it the perfect product. What are your thoughts?

Free College?

How would you like to take college classes online free of charge? There is a new wave of courses in higher education called MOOCs, or massive open online courses. These classes are open to thousands of students, all around the world, for free. All you need is internet access. Of course, some schools require fees to be paid in order to receive credit, so it's free, but still kind of not free. There are pros and cons to offering these open online classes. Obviously, it is good to offer education to those around the world who have previously been excluded because they couldn't afford it. This would allow anyone with an internet connection to have access to a college education. There are, however, some problems with these courses. One professor at Princeton, Mitchell Duneier, teaches a MOOC sociology class that has 40,000 students enrolled. Yes, 40,000! Could you imagine trying to grade assignments for 40,000 students? Or having enough office hours to meet the needs of 40,000? For one person, there is simply not enough time to address every question from 40,000 students.

A system has been set up to deal with these problems. For the discussion board, since there is no way the professor could answer every single question, the students can vote on which questions are most important to answer. As for grading, a system has been set up where assignments are graded by multiple peers and an average is taken for the final grade. But this poses some obvious problems. What if students do not take grading seriously? Or what if some students grade more harshly than others? These are issues the system still has to work out.
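To make the grading scheme concrete, here is a small sketch (the scores are made up, and the article does not specify the exact formula): averaging several peer scores, with an optional trimmed mean that drops the single highest and lowest scores to blunt the effect of an unusually harsh or careless grader.

```python
# Sketch of peer grading by averaging (hypothetical scores; the course's
# actual formula isn't specified in the article).

def final_grade(peer_scores, trim=True):
    """Average several peer scores; optionally drop the extremes first."""
    scores = sorted(peer_scores)
    if trim and len(scores) > 2:
        scores = scores[1:-1]  # drop the single highest and lowest score
    return sum(scores) / len(scores)

scores = [88, 90, 85, 40, 92]           # one grader was much harsher
print(final_grade(scores, trim=False))  # plain mean: 79.0
print(final_grade(scores))              # trimmed mean: ~87.7
```

Notice how one outlier drags the plain mean down by almost nine points, while the trimmed mean stays close to the consensus. That's one simple way a MOOC could soften the "harsh grader" problem.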

Since we are learning about design, I am curious to know what you guys think the best design for classes like this would be. How do you think the system can be improved? Do you think these classes are going to be good or bad for the educational system?

Here’s the link to the article:


Wow! The Design of an Everyday Thing!!

So here we are, racking our brains to come up with a new technology (or an improvement on a technology that already exists), and we're like, "man, this is harder than I thought it would be," or, "man, I already have a gadget that does that…". But now, I'm like, "man, I wish I would have thought of this!" Google Books engineer Dany Qumsiyeh presents a video about his brand-new design for a page-turning digital scanner that converts paper books into completely digital books!

Let’s take a moment and relate this back to all our class discussions regarding Norman’s The Design of Everyday Things. Now, first things first, I know there is much, much more that needs to go into the development before this page-turning, vacuum scanner hits the market. Let’s keep that in mind, but let’s still talk about how well designed this scanner is at first glance. Norman presents this whole idea about affordances — the perceived and actual properties of the thing. He argues that affordances of an object are perhaps the most fundamental properties that tell the users how it operates. Let’s look at the affordances of this scanner: because of its prism shape, there is really only way to set the book on it! And I’m sure there is a button (or two) to tell the machine to start and stop, but assuming those are straight forward, the user doesn’t have to do anything while the machine flips the pages and scans the content of the book. When we look at the scanners we’re using right now, we have to turn the pages ourselves and worry about the orientation and the margins, and — ugh, it just becomes so inconvenient!! What do you think about the design? Is it as clear as a glass door (that’s funny because if it’s a glass door, there isn’t a way to tell if you should push or pull and so it’s really not ‘clear’ at all)?

This machine is awesome! Nowadays, paper is obsolete and, dare I say, forgotten. Everything is digital! I was already complaining about how inconvenient it is to flip the pages myself, so I won't go there again. Dany claims that the machine requires only a 40-second set-up! 40 seconds! What do you guys think about it? Is it really that big of a breakthrough? Is it designed well enough (according to you or Norman) to make it big in the market?

Here’s one last thing: the best part of all of this remains that all these plans are open sourced with open patents, meaning even you guys can experiment and expand on it. Milestone 3 idea, anyone???

What is this Madness??

Just a few days ago I stumbled upon an interesting article from the NY Times titled "Hurricane Sandy Reveals a Life Unplugged." I thought to myself, wow, this would be perfect for a blog post! I remember discussing, either the first or second week of class, what it would be like if all of a sudden all of our technology just shut down. An important question that came up was: would society be able to function without the technology that is so embedded in our daily lives?

This article offered a perfect glimpse of what life without technology would really be like. As we all know, the hurricane's destruction completely wiped out a lot of the East Coast, taking all the power and energy with it. This meant that TVs, cell phones, the Internet, video games, etc. were all rendered useless. Thus, people were given a rare glimpse of what life would be like in a world where technology isn't the lifeblood of our existence. In the article, one paragraph describes a family whose three children are infatuated with the mother's iPad. The mom depicts the blackout experience as a form of rehab. She says, "It's like coming off drugs. There's a 48-hour withdrawal until they are not asking about the TV every other minute." Some people simply did not know what to do with themselves; they struggled to find meaningful things to do with all their free time. Conversely, some people found great uses for their time by catching up with family, exploring new hobbies and talents, etc.

While many families relished the time they had to spend with each other technology-free, they also found it difficult at times. The author writes, "among the parents who spoke with pride about newfound family time when their children were forced offline, there were honest admissions about the joy-kill of too much bonding." This raised an interesting point for me. Do you think that people are so used to immersing themselves in technology that when it comes down to one-on-one personal time with actual people we get frustrated, annoyed, or bored more easily?

Overall, I am fascinated by the idea of how our society would function without technology; in particular, how relationships would change, for better or for worse, without it always readily accessible. Do you guys think families should devote one or two days of the week where no technology is allowed? Would this help children (and adults alike) learn that it is still possible to exist without the Internet or a cell phone if need be? What are your thoughts on the article?

In with the Bad, Out with the Good

Going off our reading due Wednesday, I wanted to pay special attention to memory and what it entails. Memory is a fascinating concept. Some people are able to store loads of information in their heads while others struggle even to remember what they had for breakfast the day before. In particular, I want to focus on internal knowledge and how people essentially use it as a "memory bank." Some memories are kept throughout a lifetime, and some only last half a day until they are thrown out, never to be remembered. Basically, as Donald Norman writes in chapter three, "knowledge in the mind is ephemeral: here now, gone later" (80). But what exactly are the kinds of memories that people retain? Is it possible to know which ones we will keep longer than others?

A NY Times article titled "Praise Is Fleeting, but Brickbats We Recall" might just have an answer to those questions. It suggests that, based upon research, people tend to remember negative events better, and that those events carry much more weight in memory. This is based upon both physiological and psychological reasons. Positive and negative information are handled in separate hemispheres of the brain; negative information is processed more systematically, and we tend to reflect on it more. There are signs of this occurring in animals as well.

Bad events, therefore, take more time to wear off than good ones. In interviewing adults up to fifty years old about their childhoods, researchers found that bad memories were more prevalent, even among people who said they had a happy upbringing. The article continues for several more paragraphs and is a very interesting read, but the main point I want to bring up is that, more often than not, people tend to remember bad events more readily than good ones.

In relating this to technology, I often ponder whether this is why violent video games can make kids more violent as they get older. If bad events are more likely to be processed in the brain and committed to memory, wouldn't it make sense that violent, gory video games could condition the brain to be more malicious? What are your guys' thoughts on all of this? Do you believe that bad events tend to be remembered more easily than good ones? And could these bad memories, like violent video games, cause the brain to react in such a way that makes the person more violent and aggressive?


Do you know about the new “Secret Boards”? If not, you may be missing out!

Pinterest is one of my favorite online activities! To those of you who have never used it, here are the basics:


Who uses Pinterest? According to ComScore, about 30% of users are 25-34 years old and about 80% are female. In addition, about 60% of Pinterest users have earned a college degree and about 60% of users live in a household with an income of $25-75K.

You may be asking yourself, why is Pinterest so popular? First, Pinterest visually stands out; it is like a visual bulletin board for the Web. Second, it thrives on beautifully simple images of ideas that users group together on boards on their pages. In addition, users can follow all of a user's boards or just a single board. Lastly, you can view or locate boards based on a subject, topic, or theme. Some popular searches are crafts, gifts, fashion, interior design, and holidays.

So what about these new "secret" boards? Well, a recent article, "Pinterest Secret Boards Keep Your Pinning Under Wraps," announces Pinterest's new "secret boards," which are viewable only to the user and not to the public or the user's friends. Prior to this change, everything you pinned was viewable to the public and also to your friends. Now, just in time for the holidays, you can pin gift ideas, party ideas, and anything else you don't want others to see.

Well, who cares about these "secret" boards? Large and small companies get A LOT of referral traffic from Pinterest. In fact, Pinterest is now the leading referral-traffic generator for retail brands. You can bet that Pinterest will be going crazy around the holidays, mainly because of those popular topics I listed earlier (crafts, gifts, fashion, interior design, and holidays). Also, a lot of other online social networking sites have seen the growth and momentum of Pinterest and are working to add some of the same features to their sites.

BYOD: a Right or a Privilege?

It used to be that there was a certain amount of pride in having a company-supplied computer or cell phone. Nowadays, for those I know with this perk, it just means having two computers or two cell phones. It seems that company devices are unnecessary in a world where every competitive employee already owns a computer or smartphone, something which certainly was not always the case. So what’s the big deal with BYOD (Bring Your Own Device)?

A sample of the future workforce, college-educated employees ages 20-29, was surveyed regarding this issue. Their feelings regarding BYOD were overwhelmingly strong: the workers felt it was a right, not a privilege, to use their own devices at work. In fact, the workers answered that, regardless of company policy, they currently use their personal devices at work, and one out of three said they would break company policy to do so.

So why are companies against BYOD? Well, for one, with employees using their personally owned devices, companies lose control over the IT hardware and how it is used. How does a company tell an employee what they can and cannot do with their personal devices? The lines inevitably become blurred. Security of company data is also an issue. The same rules must be followed with personal devices as with company-owned devices, but when an employee is let go, retrieving the company's data becomes trickier.

Despite this, two-thirds of the young workers surveyed believed that they, not the company, should be responsible for the security of devices used for work purposes.

Are these employees simply being selfish, or is there something to BYOD? CIO.com says there are benefits. The most obvious benefit is the money saved: up to $80 a month per user. With BYOD, the users cover most, if not all, costs related to their devices, and in most companies with BYOD policies, they report being happy to do so. This is most likely because of the second benefit: employee satisfaction. Workers chose their personal devices themselves, and usually for good reason. Therefore, they are much happier to use a device of their own choosing than one chosen for them by the company, with which they may or may not find themselves compatible.
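Some quick back-of-the-envelope math on that $80-a-month figure (the headcounts below are purely illustrative):

```python
# Illustrative savings math using the $80/user/month figure cited above.

def annual_savings(employees, monthly_saving_per_user=80):
    """Yearly company savings if each employee brings their own device."""
    return employees * monthly_saving_per_user * 12

print(annual_savings(100))   # 96000   -> $96,000/year for 100 employees
print(annual_savings(5000))  # 4800000 -> $4.8M/year for 5,000 employees
```

Even at the low end of that per-user estimate, the savings scale quickly with headcount, which helps explain why companies are warming to BYOD despite the security headaches.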

Users are also more likely to be frequently updating their personal devices, keeping the company on the cutting edge of technology. With BYOD, the company benefits from the latest technological features without having to constantly update each device themselves–or foot the bill. For users, this is also less hassle, as many company updates are slow and tedious.

Could it be that what on the surface seems a selfish demand of young employees is actually mutually beneficial to them and their employers?

Give me back my iPhone, Grandpa!

Also, get off Facebook and don’t say LOL, ’cause you’re old and old people just shouldn’t.

That old-man meme about how Grandpa can't understand iPhones, Linux, or the cloud has been showing up more and more often lately. Steven J. Vaughan-Nichols of Computerworld claims that the joke is becoming "increasingly irrelevant."

The article, "Grandpa the programmer," argues that older people (baby boomers) are just as competent at using new technology as we younger folks are. I know I'm asking you to think way back to the beginning of the semester, and I know how hard that might be, but bear with me please. We spent more than one class discussing the notions of digital immigrants and digital natives, where we labeled those born into the Digital Age the natives and those who have adopted new technologies later in their lives the immigrants. If I remember correctly, we (yes, me too) argued that it was practically infeasible for digital immigrants to adapt to the Digital Age environment entirely. Now, jump to the reading for this week (if you read it). In chapter one, Norman argues, to some extent, that digital immigrants have a hard time adapting to new technologies. He gives several charming anecdotes about how, when new, more intricate gadgets come to market, people, particularly older people not accustomed to the maturation of technology, just don't know how to work them! These seemingly true stories magnify Norman's persuasion and credibility. I mean, I totally want to believe him that technology is maturing at a speed that digital immigrants just cannot keep up with, right?

But now we have Vaughan-Nichols writing in plain contrast to this idea that has been brought up time and time again. I think we were so into telling our own stories about our own grandmothers and grandfathers not knowing what to do with an iPad that we didn't even think about the age of the creator of the iPad. Apple is seemingly the leader in producing brand-new, state-of-the-art technologies, probably the most popular gadgets that old people can't figure out. But the CEO of Apple is no spring chicken! He's plenty old enough to be a grandfather, and he must understand technology in order to develop such innovative ideas and successfully bring them to market.

I understand that Vaughan-Nichols is talking a lot about the actual writing of code, and that's much different from just adapting to a tangible product. But didn't we say last week that older people preferred Facebook because they didn't have the time or the skill for all the HTML involved in Myspace?

I’m not going back on my argument that it’s more difficult for people not born into the Digital Age to pick up a brand new gadget — mostly because my grandma asked me where the keypad was on my phone at dinner last Sunday and because my mom continually asks me for the meaning of those stupid text message abbreviations. But I think it’s super interesting to think about the creators of these technologies that us young kids are infatuated with. They could be my grandfather!

What do you think? Does the baby boomer generation understand technology and all it has to offer? Or were you right saying that they cannot ever entirely grasp new gadgets?