I am trying to figure out what's wrong in my solution for the Coursera Scala course, third assignment. I have figured out everything else, other than the method to convert a TweetSet into a descending TweetList.
I have also tried some online solutions, such as _akojq/scala-week-3, but those did not work either. Just like my solution, it gives an "almost" descending list, but the list is not perfectly descending. Anyway, I have spent the entire day on it and have passed the assignment, so I am moving on to the next week.
Consider the case where, for example, elem.retweets = 5, mostRetweetedRight.retweets = 0 and mostRetweetedLeft.retweets = 5. The return value would be mostRetweetedRight, which is obviously incorrect. You should use >= rather than > in the comparison so that ties are handled consistently.
I completed the course requirements in a week. I signed up for the Foundations course and completed that on Day 1, and then on Day 7 I had all the peer reviews back for the Capstone and got confirmation that I had passed the certificate.
I have 20+ years experience in project management, so I am definitely not a beginner. The Foundations module, for example, was really easy for me, and I whizzed through that. The more experience you have working in a project environment, the easier it will be for you.
First, I signed up to audit the courses. Then I had a good look around the course materials for free. You can watch the videos, review the readings and download the templates without paying anything.
You only have to sign up as a student and start paying when you commit to earning the certificate for real, as you have to be subscribed to be able to submit graded assignments and quizzes, and peer-reviewed assignments.
Sign up to audit the course first, and then convert to a paid student when you are sure you have the time to commit to doing the assessed work. If you are offered a free trial, you can also take that and do as much of the course as possible in the free trial period.
There are 6 courses to do in the Google certificate. The easiest is Foundations, which has no peer-reviewed assignments. The hardest is (unsurprisingly) the Capstone. Each course has multiple modules.
Because of the speed at which I was going through the materials, I needed a tracker. I just wrote out a list of the courses and modules and made a note of what still needed to be done on them. It kept me focused on what was missing and what assignments I needed to submit (or resubmit), and made it easy to go straight to what needed doing when I had a spare moment.
However, there is some small print I read that said if you submit a peer-reviewed assignment after the deadline you might need more than one peer to review it. If you submit on the deadline, you only need one reviewer.
Do you have to take the Google Project Management certificate courses in order? No, absolutely not. I would suggest you leave the Capstone until the end, but any of the others you can do in any order.
I made this mistake once myself. There are two exercises where you have to write emails and I uploaded the wrong email file for an assignment. Believe me, when people score your assignment as 0/10 and you fail, it really stings!
In real life you might want to add more context and more words, but generally short is good. There are only certain things that will be graded anyway, so any text you put in additional boxes is not going to score you extra.
All those worries were completely unfounded. There are plenty of people going through the same experience. I was reviewing papers uploaded just that day, and people were reviewing mine within hours.
The Capstone is a lot more work than any of the other courses. I think there are only 5 peer-reviewed assignments across the rest of the courses, and while those get you used to the process and the expectations, the Capstone takes it to a whole new level.
You can definitely pass the Google Project Management Certificate, I have no doubt. There are no tutor-assessed assignments. You can take the graded quizzes as many times as you like (within the system constraints) and if you plod through the work, you can do it!
In the first week there are only a few lessons before the first assignment, which is optional. Following this, there are some short lessons on imagery in general, and then information on how to do visual research for the first real assignment.
Immediately, I try to make a short list of animals I could use as my subject: panda, fox, snail, penguin, tiger, sheep, monkey, or a moose. I even loaded up Animal Crossing to see if some of the animal villagers there would provide some inspiration. Maybe a squirrel, owl, or raccoon? A trash panda would be fun to draw.
On to the next assignment, which is the mandatory assignment for this week, and represents 20% of the grade for this course. Students are asked to use five different mediums to create an image of their chosen subject.
Over the past several weeks I have been helping students, career professionals, and people of other backgrounds learn R. During this time one thing has become apparent: people are teaching the old paradigm of R and avoiding the tidyverse altogether.
In my written response to her, I gave her solutions to her problems in base R and using the tidyverse. Here, I will go over the problems and address them from a tidy perspective. This will not serve as a full introduction to the tidyverse. For an introduction, and a case for why the tidyverse is superior to base R, I leave you with Stat 545: Introduction to dplyr.
This problem gives us a directory of files from which we need to read in the data based on the provided IDs. For the sake of this walkthrough we will randomly sample 10 values within the range designated in the problem statement (1 to 332).
We will first generate random IDs, then identify all of the files within the specified directory and obtain their file paths using the list.files() function. After this we will subset our file list based on the IDs, then iterate over the file list and read in each file as a csv using purrr::map_df() combined with readr::read_csv(). Fortunately, map_df() returns a single tidy data frame, which lets us avoid having to explicitly bind each individual data frame.
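The steps above can be sketched as follows. This is a minimal sketch, not the author's exact code: the directory name "specdata" and the column specification are assumptions based on the assignment's data files.

```r
library(purrr)
library(readr)

set.seed(42)                    # make the random sample reproducible
ids <- sample(1:332, 10)        # 10 random monitor IDs

# every file path in the (assumed) "specdata" directory
files <- list.files("specdata", full.names = TRUE)

# subset positionally: ID 1 is the first file, ID 10 the tenth, and so on
files_filtered <- files[ids]

# read each csv and row-bind the results into one data frame
specdata <- map_df(files_filtered, read_csv,
                   col_types = cols(Date    = col_date(),
                                    sulfate = col_double(),
                                    nitrate = col_double(),
                                    ID      = col_integer()))
```

Note that the extra arguments after read_csv are forwarded to it by map_df(), which is how the col_types specification reaches every file.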
Next we identify the files we need based on the sampled ids and store the subset in the files_filtered variable. We use the values of the ids to locate the file paths positionally. For example, ID number 1 is the first file, number 10 is the tenth, etc.
Now that we have identified the files that we are going to read in, we can use purrr::map_df() to apply the readr::read_csv() function to each value of files_filtered and return a data frame (hence the _df() suffix). We supply additional arguments to read_csv() to ensure that every column is read in properly.
Next, we get to utilize some dplyr magic. Here we take the specdata object we created from reading in our files, deselect the Date column, then utilize summarise_if() to apply the mean() function to our data. summarise_if() requires that we provide a predicate (a function returning TRUE or FALSE) as its first supplied argument. If (hence the _if() suffix) the predicate evaluates to TRUE on a column, then the list of functions is applied to that column. We can also specify additional arguments to those functions; here we specify na.rm = TRUE for handling missing values.
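Piped together, that step looks roughly like this (assuming the specdata data frame from the previous step):

```r
library(dplyr)

specdata %>%
  select(-Date) %>%                            # drop the Date column
  summarise_if(is.numeric, mean, na.rm = TRUE) # column means, ignoring NAs
```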
Within the function we take everything we did in the above steps but generalize it to a function. We identify the files in the directory provided (specdata), subset the files positionally based on the provided id vector, and then iterate over the file names and read them in with map_df() and read_csv().
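A sketch of that generalized function follows. The name read_spec() is illustrative, not from the assignment, and the column specification is an assumption.

```r
library(purrr)
library(readr)

# read the files for the given ids from `directory` into one data frame
read_spec <- function(directory, id = 1:332) {
  files <- list.files(directory, full.names = TRUE)
  map_df(files[id], read_csv,
         col_types = cols(Date    = col_date(),
                          sulfate = col_double(),
                          nitrate = col_double(),
                          ID      = col_integer()))
}
```

Called as, for example, read_spec("specdata", 1:10).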
Write a function that reads a directory full of files and reports the number of completely observed cases in each data file. The function should return a data frame where the first column is the name of the file and the second column is the number of complete cases.
The assignment provides an example function format, but I think it is a bit misleading, so I will go about this in the way I think is best. We will work on creating a function called complete_spec_cases() which will take only two arguments, directory and id. These will be used in the same way as in the previous problem.
For this problem our goal is to identify how many complete cases there are by provided ID. This should be exceptionally simple. We will have to identify our files, subset them, and read them in the same way as before. Next we can identify complete cases by piping our specdata object to na.omit() which will remove any row with a missing value. Next, we have to group by the ID column and pipe our grouped data frame to count() which will count how many observations there are by group. We will then return this data frame to the user.
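Put together, complete_spec_cases() might look like this (a sketch under the same assumptions as before about the files and their columns):

```r
library(dplyr)
library(purrr)
library(readr)

complete_spec_cases <- function(directory, id = 1:332) {
  files <- list.files(directory, full.names = TRUE)
  files[id] %>%
    map_df(read_csv,
           col_types = cols(Date    = col_date(),
                            sulfate = col_double(),
                            nitrate = col_double(),
                            ID      = col_integer())) %>%
    na.omit() %>%   # drop any row with a missing value
    group_by(ID) %>%
    count()         # one row per ID: n = number of complete cases
}
```

Because the data frame is grouped by ID before the pipe reaches count(), the counts are computed per monitoring station.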
Write a function that takes a directory of data files and a threshold for complete cases and calculates the correlation between sulfate and nitrate for monitor locations where the number of completely observed cases (on all variables) is greater than the threshold. The function should return a vector of correlations for the monitors that meet the threshold requirement. If no monitors meet the threshold requirement, then the function should return a numeric vector of length 0. A prototype of this function follows:
Let's keep this simple. The statement above is essentially asking us to find the correlation between nitrate and sulfate for each monitoring station (ID). But there is a catch! Each ID must meet a specified threshold of complete cases, and if none of the monitors meet the requirement the function must return a numeric(0).
For the sake of this example, we will continue to use the specdata object we created in previous examples, and we will set our threshold to 100. Once we identify the stations with the proper number of complete cases (> 100), we will store that data frame in an object called id_counts.
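Continuing with the specdata object, the whole step might be sketched as follows. The correlations object name is illustrative.

```r
library(dplyr)

threshold <- 100

# stations whose number of complete cases exceeds the threshold
id_counts <- specdata %>%
  na.omit() %>%
  group_by(ID) %>%
  count() %>%
  filter(n > threshold)

# per-station correlation between sulfate and nitrate;
# yields numeric(0) when no station passes the threshold
correlations <- specdata %>%
  na.omit() %>%
  filter(ID %in% id_counts$ID) %>%
  group_by(ID) %>%
  summarise(correlation = cor(sulfate, nitrate)) %>%
  pull(correlation)
```

If id_counts has zero rows, the filter() on ID leaves an empty data frame, so pull() returns a numeric vector of length 0, satisfying the problem's requirement without any special-casing.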