Quite some time has passed since the last Women Techmakers Vienna event. I still get positive feedback, and even more: people ask me to upload the content of the Impostor Syndrome workshop online. It is always great to see how helpful such information is. The workshop helped me a great deal!
So, here goes: find the resources of the workshop right here!
The materials for this workshop were inspired by the Ada Initiative; you can find the original information on their website. The content was slightly adapted to fit the Women Techmakers Vienna conference (the main content remained the same).
There are several resources available for the workshop. Find all the materials here.
Available materials are:
A handout for the workshop explains in general what impostor syndrome is and how to overcome it. It also contains further references.
The workshop was designed so that it can easily be reused and presented at other workshops/conferences as well. An example facilitator guide is also included.
Three exercises were conducted in the workshop; their descriptions can also be found at the link above.
Thank you to the ones who attended and thank you to the ones who are interested in this topic.
I had the opportunity to give a presentation about what I have been doing lately: a web application that shows off the power of SPARQL. I turned my experience into an introduction on “Developing for the Semantic Web”.
Last week was the Google I/O developer conference, where Polymer 1.0 was presented. My curiosity was finally sparked, and I made some time to check it out a little. I was looking for a fast way to create a Java web application in which I could use Polymer, and I had heard how easy and fast Spring Boot is.
So voilà: my first Java web app with Spring Boot and Polymer 1.0. You can clone it from Git and use it as an archetype (also for learning purposes); the Polymer files are already included in the project. I used Maven to build the project, which is also easy, but one can use Gradle as well.
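As a rough sketch of what such a Maven setup involves (this is an illustrative fragment, not the project's actual pom.xml, and the version shown is just one from the Polymer 1.0 era): the `spring-boot-starter-web` dependency pulls in Spring MVC with an embedded servlet container, and Spring Boot serves anything placed under `src/main/resources/static` as-is, which is where the Polymer files can live.

```xml
<!-- Illustrative pom.xml fragment, not the repository's actual build file -->
<parent>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-parent</artifactId>
  <version>1.2.4.RELEASE</version>
</parent>
<dependencies>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
  </dependency>
</dependencies>
```

With this in place, `mvn spring-boot:run` starts the app and the static Polymer assets are served directly.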
One of my tasks at university was to download data from the Twitter public stream and analyse it. This work was made easier by a tool that visualizes the number of downloads per hour/day/month.
The API I used to download tweets is based on Adam Green’s implementation, 140dev. He also has a visualization tool for the downloaded tweets; however, it focuses less on the numbers and much more on the tweet texts.
The code for my implementation can be found on my GitHub repository.
It contains simple bar charts of the number of tweets downloaded.
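The counting behind such per-hour bar charts can be sketched in plain Java (the names here are illustrative, not the repository's actual code): truncate each tweet's timestamp to the hour it falls into and tally the buckets.

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class TweetCounts {

    // Count tweets per hour: truncate each timestamp to the start of its
    // hour and count how many timestamps fall into each bucket.
    public static Map<Instant, Integer> perHour(List<Instant> timestamps) {
        Map<Instant, Integer> counts = new TreeMap<>();
        for (Instant t : timestamps) {
            Instant hour = t.truncatedTo(ChronoUnit.HOURS);
            counts.merge(hour, 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        List<Instant> ts = List.of(
                Instant.parse("2015-05-30T10:15:00Z"),
                Instant.parse("2015-05-30T10:45:00Z"),
                Instant.parse("2015-05-30T11:05:00Z"));
        System.out.println(perHour(ts));
    }
}
```

The same idea extends to per-day or per-month buckets by truncating to a coarser unit before counting; the resulting map feeds directly into a bar chart.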
Working with the Twitter public stream, I ran into a lot of questions, some of which I found answers to and some of which I did not:
How can one download tweets only for a specific country?
When is the rate limit reached?
If the rate limit is reached, how long do I have to wait until I can download again?
Why do some Twitter user accounts work and some do not?
And so on…
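On the country question, one partial approach (my own sketch, not part of 140dev) uses the streaming API's `statuses/filter` endpoint, which accepts a `locations` parameter: comma-separated longitude/latitude bounding boxes, southwest corner first. Building that parameter value for a rough box around a country might look like this:

```java
import java.util.Locale;

public class TwitterLocations {

    // Build the value of the streaming API's "locations" parameter:
    // southwest lon, southwest lat, northeast lon, northeast lat.
    // Note that the API expects longitude first, unlike the usual lat/lon order.
    public static String boundingBox(double swLon, double swLat,
                                     double neLon, double neLat) {
        return String.format(Locale.ROOT, "%.2f,%.2f,%.2f,%.2f",
                swLon, swLat, neLon, neLat);
    }

    public static void main(String[] args) {
        // Rough bounding box around Austria (approximate coordinates).
        System.out.println("locations=" + boundingBox(9.53, 46.37, 17.16, 49.02));
    }
}
```

The caveat is that bounding boxes only match geotagged tweets, so this approximates a country rather than filtering it exactly.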
These questions were only one part of my time at university; the rest I will probably tell in another post.
The experience of delivering a training is entirely different from working through a tutorial on your own. I learned a lot already while preparing the tutorial. I also got feedback from the participants, and mostly I noticed what was missing while I was presenting. I wrote down what went wrong as well as what went well. From the feedback and my own observations, I came up with a new, improved presentation and more organized tutorials.