Monday, June 30, 2014

Management

Managing your resources (energy, time, and students) is a nontechnical topic, but it is nevertheless essential for your success in academia. There isn't much talk or guidance on these topics in graduate school. You are expected to attain these skills on your own, or perhaps acquire them by osmosis from professors and colleagues.

Here I will keep it short and just post my summary slides of three great books I read on management.

The first one is the Seven Habits book. This book is about managing yourself to be an effective person. I first read this book around age 18 and found it long, tedious, and boring. Reading it again at 38, I think the book has great advice.
Link to my summary slides on the seven habits book.

Getting Things Done (GTD) is the best book on low-stress time and project management. This summer, do yourself a favor: read the book, and adopt the GTD system ASAP. You will thank me later.
Link to my summary slides on the GTD book.

I am not aware of much work on student/postdoc management, although this is very important for project management. The One Minute Manager provides a minimalist method for managing people. There is also some advice on the web (this scrum method is worthy of note), but if you can suggest well-researched, time-tested approaches to student management, I am eagerly waiting. This is an area I should learn more about.

Friday, June 27, 2014

Targeted crowdsourcing using app/interest categories of users

Part of my research is on crowdsourcing. Basically, crowdsourcing means performing micro-collaborations with many people to complete a task. You divide the task into microtasks and outsource them to people. They provide solutions to your microtasks, and you aggregate those solutions to obtain the answer to your original task.

Aggregating the responses from the crowd is a challenge in itself. If the questions are open-ended, the answers come in a variety of forms, and you cannot aggregate them automatically with a computer. (You could use human intelligence again to aggregate them, but then how are you going to aggregate/validate these next-level aggregators?)

To simplify the aggregation process, we use multiple-choice question answering (MCQA). When the answers are provided as choices (a, b, c, or d), they become unambiguous and easy to aggregate with a computer. The simplest aggregation rule for MCQA is majority voting: whichever option is chosen most often is returned as the final answer.
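
To make the aggregation rule concrete, here is a minimal Java sketch of majority voting. The data layout (a plain array of chosen options) is just for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of majority voting for MCQA aggregation.
// Each element of `answers` is one participant's chosen option ('a'..'d').
public class MajorityVote {

    public static char aggregate(char[] answers) {
        Map<Character, Integer> counts = new HashMap<>();
        for (char a : answers) {
            counts.merge(a, 1, Integer::sum); // count each option
        }
        char best = answers[0];
        int bestCount = 0;
        for (Map.Entry<Character, Integer> e : counts.entrySet()) {
            if (e.getValue() > bestCount) {
                best = e.getKey();
                bestCount = e.getValue();
            }
        }
        return best; // the option chosen most often
    }

    public static void main(String[] args) {
        char[] answers = {'a', 'c', 'b', 'c', 'c', 'd'};
        System.out.println(aggregate(answers)); // prints: c
    }
}
```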

Recently, we started investigating MCQA-based crowdsourcing in more depth. What are the dynamics of MCQA? Is majority voting good enough for all questions? If not, how can we do better?

To investigate these questions, we designed a gamified experiment. We developed an Android app that lets the crowd answer questions with their smartphones as they watch the Who Wants To Be A Millionaire (WWTBAM) quiz show on a Turkish TV channel. When the show is on air in Turkey, our smartphone app signals the participants to pick up their phones. When a question is read by the show host, my PhD students type in the question and the answer options, which are transmitted via Google Cloud Messaging (GCM) to the app users. App users play the game and enjoy competing with other app users, and we get a chance to collect precious data about MCQA dynamics in crowdsourcing.
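
For the curious, the delivery step looked roughly like the following sketch against the GCM HTTP endpoint of that era (since retired in favor of Firebase Cloud Messaging). API_KEY and DEVICE_REG_ID are placeholders, and the payload field names are invented for illustration; only the endpoint and headers reflect the real GCM API:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch: push a freshly transcribed question and its options to app
// users through GCM. Not our production code; the JSON "data" shape
// here is an assumption for illustration.
public class PushQuestion {

    public static void main(String[] args) throws Exception {
        String payload = "{"
                + "\"registration_ids\": [\"DEVICE_REG_ID\"],"
                + "\"data\": {"
                + "\"question\": \"Which planet is known as the red planet?\","
                + "\"a\": \"Venus\", \"b\": \"Mars\","
                + "\"c\": \"Jupiter\", \"d\": \"Saturn\""
                + "}}";

        HttpURLConnection conn = (HttpURLConnection)
                new URL("https://android.googleapis.com/gcm/send").openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization", "key=API_KEY");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(payload.getBytes("UTF-8"));
        }
        System.out.println("GCM response: " + conn.getResponseCode());
    }
}
```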

Our WWTBAM app has been downloaded and installed more than 300,000 times and has enabled us to collect large-scale real data about MCQA dynamics. Over a period of 9 months, we collected over 3 GB of MCQA data. Our dataset contains about 2,000 live quiz-show questions and more than 200,000 answers to those questions from the participants.

When we analyzed the data, we found that majority voting is not enough for all questions. Although majority voting does well on the simple questions (the first 5) and achieves more than 90% accuracy, as the questions get harder, the accuracy of majority voting plummets quickly to 40%. (There are 12 questions in WWTBAM, and the difficulty increases with each question. Questions 10, 11, and 12 are seldom reached by the quiz contestants.)

We then focused on how to improve the accuracy of aggregation. How can we weight the answers to give more influence to correct ones and let them win even when they are in the minority?

As expected, we found that a participant's previous correct answers indicate a higher likelihood that their current answer is correct. Collaborating with colleagues in data mining, we came up with a PageRank-like solution for history-based aggregation. This solution raised the accuracy of the aggregated answers to 90% even for the harder questions.
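
To give a flavor of history-based weighting, here is a deliberately simplified stand-in in Java. Note that this is plain accuracy-weighted voting, not the PageRank-like algorithm we actually used; it only illustrates the core idea that a reliable minority can outvote an unreliable majority:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for history-based aggregation (NOT the
// PageRank-like algorithm from the paper): each vote is weighted by
// the participant's past accuracy.
public class HistoryWeightedVote {

    // answers[i] is participant i's option; pastAccuracy[i] is the
    // fraction of that participant's earlier answers that were correct.
    public static char aggregate(char[] answers, double[] pastAccuracy) {
        Map<Character, Double> weight = new HashMap<>();
        for (int i = 0; i < answers.length; i++) {
            weight.merge(answers[i], pastAccuracy[i], Double::sum);
        }
        char best = answers[0];
        double bestWeight = -1.0;
        for (Map.Entry<Character, Double> e : weight.entrySet()) {
            if (e.getValue() > bestWeight) {
                best = e.getKey();
                bestWeight = e.getValue();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // Three historically unreliable participants pick 'a'; two
        // reliable ones pick 'b'; 'b' wins despite being the minority.
        char[] answers = {'a', 'a', 'a', 'b', 'b'};
        double[] accuracy = {0.2, 0.3, 0.2, 0.9, 0.8};
        System.out.println(aggregate(answers, accuracy)); // prints: b
    }
}
```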

We also observed some unexpected findings in the data collected by our app. The app records each participant's response time, and we saw that response time correlates with correctness. But the relationship is curious: for the easier questions (the first 5), earlier responses are more likely to be correct, while for the harder questions, delayed responses are more likely to be correct. We are still trying to figure out how to put this observation to good use.

Another surprising result came recently. One of my PhD students, Yavuz Selim Yilmaz, proposed a simple approach that in the end proved as effective as the sophisticated history-based solution. This approach does not even use the history of participants, which makes it more widely applicable. Yavuz's approach was to use the interests of participants to weight their answers.

In order to obtain the interests of the participants, Yavuz had a very nice idea. He proposed to use the categories of the apps installed on the participant's phone. Surprised, I asked him how he planned to learn which other apps are installed on the participants' phones. It turns out this is one of the basic abilities Android gives to an installed app (like our WWTBAM app): it can query and learn about the other apps installed on the user's phone. (That it is this easy says something about Android privacy and security. We didn't collect or maintain any identifying information about users, but this capability could potentially be used for bad.)
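
Here is roughly what that query looks like on Android (a sketch, not our app's code). At the time, getInstalledApplications() required no extra manifest permission at all:

```java
import java.util.List;

import android.app.Activity;
import android.content.pm.ApplicationInfo;
import android.content.pm.PackageManager;
import android.os.Bundle;
import android.util.Log;

// Sketch of how an installed Android app can enumerate the other
// apps on the phone via PackageManager.
public class InstalledAppsProbe extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        PackageManager pm = getPackageManager();
        List<ApplicationInfo> apps =
                pm.getInstalledApplications(PackageManager.GET_META_DATA);
        for (ApplicationInfo app : apps) {
            Log.d("InstalledApps", app.packageName); // one line per installed app
        }
    }
}
```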

Yavuz assigned interest categories to participants using the Google Play Store's 32 predefined app categories (e.g., Books and Reference, Business, Comics, Communication, Education, Entertainment, Finance). If a participant had more than 5 apps installed from one of these categories, the participant was marked as having an interest in that category. We used half of the data as a training set to find which interest category produced the highest accuracy for a given question number. Then, on the test set, the algorithm simply takes a majority vote among the participants in the category deemed most successful for that question number. Is this too simplistic an approach?
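
A compact Java sketch of this scheme as described above; the class and method names are illustrative, and the training step that picks the best category per question number is assumed to have been done offline:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Illustrative sketch of interest-based aggregation. All names here
// are made up; only the logic follows the description in the post.
public class InterestVote {

    // question number -> best-performing interest category (from training)
    private final Map<Integer, String> bestCategory;

    public InterestVote(Map<Integer, String> bestCategory) {
        this.bestCategory = bestCategory;
    }

    // A participant "has interest" in a Play Store category if more
    // than 5 of their installed apps belong to it.
    public static Set<String> interests(Map<String, Integer> appsPerCategory) {
        Set<String> result = new HashSet<>();
        for (Map.Entry<String, Integer> e : appsPerCategory.entrySet()) {
            if (e.getValue() > 5) {
                result.add(e.getKey());
            }
        }
        return result;
    }

    // Majority vote restricted to participants whose interests include
    // the category that training deemed best for this question number.
    public char aggregate(int questionNumber, List<Character> answers,
                          List<Set<String>> participantInterests) {
        String category = bestCategory.get(questionNumber);
        Map<Character, Integer> counts = new HashMap<>();
        for (int i = 0; i < answers.size(); i++) {
            if (participantInterests.get(i).contains(category)) {
                counts.merge(answers.get(i), 1, Integer::sum);
            }
        }
        char best = '?'; // '?' means nobody from that category answered
        int bestCount = 0;
        for (Map.Entry<Character, Integer> e : counts.entrySet()) {
            if (e.getValue() > bestCount) {
                best = e.getKey();
                bestCount = e.getValue();
            }
        }
        return best;
    }
}
```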

Lo and behold, this approach lifted the accuracy to around 90% across all levels of questions. (This paper received the Outstanding Paper Award at the Collaboration Technologies and Systems (CTS 2014) conference.)


Ultimately, we want to adopt the MCQA-crowdsourcing lessons we learned from WWTBAM to build crowdsourcing apps for location-based recommendation services.

Another application area for MCQA-crowdsourcing would be market research. A lot of people in industry, consumer goods, music, and politics are interested in market research. But market research is difficult to get right, because you are trying to predict whether a product will get traction by asking a small subset of people who may not be very relevant or representative. The context and interests of the people surveyed are important in weighting their responses. (I hope this blog post will be used in the future to kill some stupid patents proposed on this topic ;-)

Monday, June 23, 2014

Writing versus Typing

Recently, there have been several high-profile articles on how writing with a pen is much better for the brain than typing. One article presented a study that found that if you write rather than type, you will learn and recall more of the lecture.

For full disclosure, I am a fountain-pen fan, and I enjoy the elegance and beauty of writing with a pen. I like writing so much that I have been thinking about converting to a tablet solution (MS Surface Pro 3).

But, as I weigh my options, I cannot get myself to go for a tablet solution or a dual (laptop+tablet) solution. Typing simply knocks the socks off writing when it comes to productivity.

Writing with a fountain pen has many drawbacks. First of all, it is not digital. It cannot be easily stored and archived. It is not searchable, so it is not easily accessible. Most importantly, the writing produced by a fountain pen is not easily editable. This forces you to be extra careful as you write and to self-censor, and that kills creativity.

Let me reiterate this point. The fundamental rule of constructing prose is to keep the creating (drafting) and editing functions separate. Since editing is hard when using a fountain pen, you get cautious and blend editing into creating/drafting. And that is not kosher.

Writing with a tablet also has many drawbacks. It is digital, all right, but editing handwritten text on it is still very clumsy. Copying and moving your handwriting around is harder than simply wrangling text in a text editor. Transposing words, inserting a word in between, deleting a sentence, and so on are all hard. Moreover, the tablet is not yet refined enough to give the fountain pen's experience and simplicity. Even small inconveniences and bumps can make the experience unbearable and keep you away from writing. The tablet is a compromise between handwriting and typing. And instead of offering the best of both worlds, it tends to offer the worst of both.

Typing does not suffer from these drawbacks. The only drawback to typing is that the writing looks too uniform, but you can avoid this by using a special font. I use the Apple Chalkboard font in Emacs, and I like it a lot; it provides some visual differences between different parts of the text. Furthermore, Emacs makes editing, searching, and replacing text so fast that I don't get bogged down when revising my writing.

In Emacs, Org-mode offers extra benefits for me. The outline mode is useful for brainstorming and for organizing my thoughts and writing. Org-mode is also my GTD tool: I can easily track issues and ToDo lists inside my projects.

So, for me, the choice is clear: Emacs Org-mode on my MacBook Air. I am not even considering a dual (laptop+tablet) solution, because using two separate systems for writing inevitably leads to integration problems and complexity. However, I do occasionally use my fountain pen for brainstorming, which I enjoy a lot.