Lum, a connected device for toddlers.

Jihyun, Pedro and I worked on Lum, a connected device for toddlers.

Lum connects the living room light and the nightlight so they turn on and off automatically based on the baby's schedule, and it also plays lullabies and other bedtime songs.

Research:

According to the National Sleep Foundation, toddlers (1–3 years old) sleep 12–14 hours per day. It's important for toddlers to maintain a daily sleep schedule and a consistent bedtime routine, so the foundation recommends that parents keep the bedroom environment the same every night and throughout the night. It's important for babies to have their own rituals. Lastly, many children this age get disoriented and upset when they wake up at night in a dark room and can't see anything; shedding a little light on things can help tremendously.

Our hypothesis: because Lum knows the baby's bedtime, it can gradually dim the living room light (or any main light) and turn on the Lum night lamp. Parents can log their baby's daily routine.
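As a rough illustration of the hypothesis, the crossfade between the main light and the nightlight could be sketched like this (the fade window and values here are made up for the sketch, not Lum's actual settings):

```javascript
// Hypothetical sketch of Lum's bedtime crossfade. Over FADE_MINUTES
// before the logged bedtime, the living room light dims from 1 to 0
// while the Lum nightlight brightens from 0 to 1.
const FADE_MINUTES = 30; // assumed fade window, not Lum's real value

function brightness(minutesUntilBedtime) {
  // Progress through the fade, clamped to [0, 1]:
  // 0 = fade not started yet, 1 = bedtime reached.
  const t = Math.min(1, Math.max(0, 1 - minutesUntilBedtime / FADE_MINUTES));
  return { livingRoom: 1 - t, nightlight: t };
}

console.log(brightness(15)); // halfway through the fade
```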

Function:

There are three modes:

  • wake up
  • nap time
  • bed time

We will be working on the physical lamp next week so stay tuned!  

Here is the working site: http://jihyunlee.github.io/lum


We want Steve Jobs back

Apple has been struggling since Steve Jobs passed away. People are not giving the company the same credibility it had in Steve Jobs's era. It makes me wonder: if Steve Jobs were still around, would we be more convinced by their newly released products? If "Tim Cook" were replaced with "Steve Jobs" in every 2013 Apple-related article, how would that transform our perception of Apple?

I started testing this idea by making a simple Google Chrome extension that replaces every "Tim Cook" with "Steve Jobs". Once you see the change in names, the articles read with a different frame of mind.
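The core of an extension like this is small: a content script that walks the page's text nodes and rewrites them. Here is a minimal sketch of that idea (not the extension's actual source):

```javascript
// Minimal content-script sketch: replace every "Tim Cook" in the page's
// text with "Steve Jobs". (A sketch of the idea, not the extension's code.)
function swapNames(text) {
  return text.replace(/Tim Cook/g, "Steve Jobs");
}

// Recursively rewrite every text node under the given DOM node.
function walk(node) {
  if (node.nodeType === 3) {          // 3 = text node
    node.nodeValue = swapNames(node.nodeValue);
  } else {
    for (const child of node.childNodes) walk(child);
  }
}

// In a real extension this file runs as a content script on page load.
if (typeof document !== "undefined") walk(document.body);
```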

Then I tried replacing Tim Cook's portraits with Steve Jobs's (see below).

Would you think differently of current Apple products if Steve Jobs were vouching for them?

 

FastCo article about Tim Cook, October 2013

 

Here are more screenshots of 2013 Apple related articles using my extension

The reincarnation of Tupac at the 2012 Coachella festival was my inspiration.

2pac hologram


EmojiOverload Chrome Extension

"While an API is something that exposes specific ways of interacting with a system, shielding the system from direct manipulation, an extension is generally something that is integrated directly with a system." (Kyle McDonald and Lauren McCarthy)

This week, I created a Google Chrome extension for my friends who cannot live without emojis.

Emojis have become one of the main languages of digital communication. Would it be possible to replace the text we see on the web with emojis and still understand the context? How different would an emojified site feel? This extension lets you dive into the world of emojis.
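Under the hood the idea is just a word-to-emoji lookup applied to the page text. A toy sketch (the dictionary here is a tiny made-up sample, not the extension's real one):

```javascript
// Toy word-to-emoji dictionary; the real extension maps far more words.
const EMOJI = { love: "❤️", pizza: "🍕", cat: "🐱", happy: "😊" };

// Replace every word that has an emoji equivalent, leaving the rest alone.
function emojify(text) {
  return text.replace(/\b[a-zA-Z]+\b/g, (w) => EMOJI[w.toLowerCase()] || w);
}

console.log(emojify("I love my happy cat")); // → "I ❤️ my 😊 🐱"
```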

Check out my github for the code: 

See the results on different webpages: 

Click below to browse EmojiOverloaded site screenshots

CitiCycle: Sign Up for your first CitiBike Spinning Class!

Objective: Creatively misuse an existing API in order to reveal something about the service.

The API I chose to work with is CitiBike's. Currently, CitiBike shares data on its stations, helmet store locations, and branches. I wanted an API for individual bikes, but they are not sharing that one yet.

New York City's new bike-share Citi Bikes sit docked when not in use. Goldstein, whose self-deprecating moniker is "Fat Jew," started out wanting only to make a statement about how ridiculously expensive the city's spin classes have become; SoulCycle, for example, charges about $34 per class in NYC. He tweeted, "I'm seriously teaching SoulCycle classes for homeless people in NYC. I'm extremely fat but my core is strong."

I agree with Goldstein: it's ridiculous how overpriced spinning classes are in New York City. Using docked CitiBikes is a perfect way to get some free exercise.

Yes, it is embarrassing to do this at a popular docking station, so we need a station that has a lot of bikes but less traffic. By going through the API, I can find the station with the maximum number of docked bikes. Once I know the location, I can point to it on Google Maps, and "@citi_cycle" will tweet regularly to announce open slots for the next spinning class.
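Picking the target station from the API response is a one-line pass over the station list. A sketch, assuming the 2014-era station feed shape where each station reports an `availableBikes` count (the sample data below is invented for illustration):

```javascript
// Find the station with the most docked bikes in a CitiBike
// station-feed response. Field names follow the 2014-era JSON feed;
// the sample stations below are made up.
function fullestStation(stations) {
  return stations.reduce((best, s) =>
    s.availableBikes > best.availableBikes ? s : best);
}

const sample = [
  { stationName: "W 52 St & 11 Ave", availableBikes: 12 },
  { stationName: "Franklin St & W Broadway", availableBikes: 31 },
  { stationName: "St James Pl & Pearl St", availableBikes: 5 },
];

console.log(fullestStation(sample).stationName); // → "Franklin St & W Broadway"
```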

 

 

    Follow CitiCycle on Twitter @citi_cycle

    3-D Prints: Computation and Fashion

    I have been focusing on visualizing patterns of people's connections this semester. These are 3-D prints from my Processing code, which creates delaunay patterns with Kinect & Leap sensors.


    Leap motion - connecting fingers

    I finally got to work with the Leap, and the results are really satisfying. The Leap is very good at finger detection, which is exactly what I want for my project. Previously I used a Microsoft Kinect, which was rather slow and did not detect finger points as precisely as the Leap.

    Here is my documentation of connecting the left and right finger points. I am currently developing this concept further with sound and visuals. Stay tuned!
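The pairing step itself is simple: match up corresponding fingertips of the left and right hands so each pair can be drawn as a connecting line. A sketch of just that step, with fingertips as [x, y, z] arrays the way leapjs reports `tipPosition` (the drawing and sound parts are left out):

```javascript
// Pair corresponding fingertips of the left and right hand so each
// pair can be drawn as a connecting line. Tips are [x, y, z] positions,
// as in leapjs's tipPosition. Rendering itself is omitted here.
function connectFingers(leftTips, rightTips) {
  const pairs = [];
  const n = Math.min(leftTips.length, rightTips.length); // ignore unmatched tips
  for (let i = 0; i < n; i++) {
    pairs.push({ from: leftTips[i], to: rightTips[i] });
  }
  return pairs;
}
```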

    Working with Leap Motion


    Visualizing hand interaction through delaunay patterns

    What is it?

    It's an interactive screen that makes delaunay patterns from your movement. You can save the frame at any point in time to turn it into a 3-D product.
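For context, the check at the heart of delaunay triangulation is the circumcircle test: a triangulation is delaunay when no point falls inside any triangle's circumcircle. A standalone sketch of that predicate (this is the standard determinant test, in plain JavaScript rather than code from my Processing sketch):

```javascript
// Circumcircle test behind delaunay triangulation: is point p strictly
// inside the circumcircle of triangle (a, b, c)? Assumes the triangle
// is given in counterclockwise order.
function inCircumcircle(a, b, c, p) {
  const ax = a.x - p.x, ay = a.y - p.y;
  const bx = b.x - p.x, by = b.y - p.y;
  const cx = c.x - p.x, cy = c.y - p.y;
  const det =
    (ax * ax + ay * ay) * (bx * cy - cx * by) -
    (bx * bx + by * by) * (ax * cy - cx * ay) +
    (cx * cx + cy * cy) * (ax * by - bx * ay);
  return det > 0;
}
```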

    Why am I making it / who is it for?

    To make the invisible spaces between individuals more tangible and physical by creating and visualizing them.

    Visual demonstration of what I imagine it to be 

    I imagine it as a Processing app projected on a big screen. Physical 3-D objects will be produced later from selected saved frames of the projection.

    Questions I am struggling with (conceptual or technical).

    Technically, getting the Kinect sensor to capture two people is not working quite smoothly yet. I hope to create patterns that are visually clear, with accurate points.

    Conceptually, I am not quite sure if this idea is enticing. I might need more user testing to see if it is a valuable idea.


    A Spider Brings "Decaying Opulence" Fashion

    This is my final project idea #1 for this spring semester. I want to eventually have one or two 3-D pieces that are born from artworks created by Processing. 

    What is it?

    It's an interactive screen that makes delaunay patterns from your silhouette. Once it gets nodes from your silhouette, patterns start to grow inward from your outline toward the center point of your silhouette. You can save the frame at any point in time to turn it into a 3-D product.

    Why am I making it / who is it for?

    This is a different approach to making clothing. It's not about starting with a fabric or a material; my project starts with the person, and the clothes (patterns) get made specifically for that person.

    Visual demonstration of what I imagine it to be 

    I imagine it as a Processing app projected on a big screen. Physical 3-D objects will be produced later from selected saved frames of the Processing sketch.

    Questions I am struggling with (conceptual or technical).

    Technically, I have two pieces of code that I'm having a hard time combining: one that makes a pattern with the Kinect sensor, and another that grows a pattern from a static .stl file that already has a triangular mesh.

    Conceptually, I am not quite sure if this idea is enticing. I might need more user testing to see if it is a valuable idea.

    3-D arm piece created from a Processing delaunay pattern. The pattern was created by body-silhouette detection with a Kinect, then saved as a PDF and translated in Rhino3D into an .stl file.

    I am in the process of making interactive voronoi patterns using the Kinect. My concept is to visualize the invisible connection between two people in a physical space. For now, I am just trying to see if the Microsoft Kinect can capture my silhouette and make patterns from the nodes on my silhouette.


    Computing triangle repetition

    For me, it's easy to make textiles in Illustrator, so I originally started coding this pattern thinking it would be a piece of cake. It sure wasn't as easy as Illustrator! I learned to convert my cartesian coordinates to polar coordinates, used sin() and cos(), and made two classes repeat differently. What an enthralling beginning to a project! I am glad I got my system laid out, and I'm very excited to play with images inside these triangles and use random functions.
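The cartesian-to-polar trick boils down to placing each repeated shape at an angle theta around a ring and converting back with cos() and sin(). A small sketch of just that placement math, in plain JavaScript rather than Processing (function and field names here are my own, for illustration):

```javascript
// Place n repeated shapes evenly on a ring of radius r: step the polar
// angle theta, then convert back to cartesian with cos()/sin().
function ringPositions(n, r) {
  const pts = [];
  for (let i = 0; i < n; i++) {
    const theta = (2 * Math.PI * i) / n; // polar angle of the i-th copy
    pts.push({
      x: r * Math.cos(theta),
      y: r * Math.sin(theta),
      rotation: theta, // rotate each copy so it faces outward
    });
  }
  return pts;
}
```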

    Here is my code on Git: https://github.com/sukim/NatureOfCode-homework/tree/master/week5_1/triangle_rotation4


    This was the base template I referenced. Rotating seems so easy when you're just looking at it, right?