Mc Creary Central High School Statistics and Central Tendency Exercises Mathematics Assignment Help
I’m working on a statistics question and need support to help me learn.
Review of Central Tendency and other topics
Purpose: To reinforce students' understanding and application of descriptive statistics, histograms, and skewness.

1. Using the data below, answer the questions that follow:

4, 17, 9, 10, 3, 14, 13, 12, 15, 3, 8, 10, 4, 15, 14, 4, 19, 10, 11, 12, 15

- Draw a histogram (be sure your axes are properly labeled).
The first step in drawing a histogram is to find the class boundaries. Given the size of this data set, 5 classes will be enough.

We start by finding the range of the data:

Range = maximum - minimum = 19 - 3 = 16

To make the classes tidier, we will instead cover the interval from 0 to 20 with 5 classes, which gives a class width of 20 / 5 = 4.

Now we can find the classes: we start at the lower boundary (0) and keep adding the class width (4) until we reach the upper boundary (20).
Calculations | Class | Freq. |
0 + 4 = 4 | 0 to 4 | 2 |
4 + 4 = 8 | 4 to 8 | 3 |
8 + 4 = 12 | 8 to 12 | 6 |
12 + 4 = 16 | 12 to 16 | 8 |
16 + 4 = 20 | 16 to 20 | 2 |
With this information we can plot the following graph.
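As an optional check (my own sketch, not part of the assignment; it assumes Python with matplotlib is available), the snippet below recomputes the class frequencies for these bins and draws the labeled histogram.

```python
import matplotlib.pyplot as plt

data = [4, 17, 9, 10, 3, 14, 13, 12, 15, 3, 8, 10,
        4, 15, 14, 4, 19, 10, 11, 12, 15]

# Five classes of width 4 covering 0 to 20, matching the table above.
bins = [0, 4, 8, 12, 16, 20]

# plt.hist both counts the frequencies and draws the bars.
counts, edges, _ = plt.hist(data, bins=bins, edgecolor="black")
print(dict(zip(zip(edges[:-1], edges[1:]), counts)))  # class -> frequency

plt.xlabel("Value (class width = 4)")
plt.ylabel("Frequency")
plt.title("Histogram of the sample data")
plt.show()
```

The printed frequencies (2, 3, 6, 8, 2) match the table above.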
[supanova_question]
TWU Organizational Issues in Crises American Airlines Discussion Writing Assignment Help
I’m working on a writing multi-part question and need a sample draft to help me understand better.
Researching Organizational Issues in Crises Assignment
Answer these questions on the layoff problems that American Airlines is facing because of COVID-19.
Respond to these topics.
1) Explain the problems that American Airlines is facing.
2) How has American Airlines communicated its problems to the public, its customers, the government, and its employees?
3) What mistakes did it make in its communication strategies?
4) How could American Airlines improve its communication strategy?
Write the questions out completely and then answer them. This paper should be 7 to 10 pages. Be sure to use and document outside research. Use the new APA format.
[supanova_question]
MIS 600 GCU Titos Vodka Establishes Brand Loyalty With Authentic Social Strategy Case Writing Assignment Help
I’m working on a writing case study and need a sample draft to help me learn.
Read “Application Case 5.8: Tito’s Vodka Establishes Brand Loyalty With an Authentic Social Strategy,” located in Chapter 5 of the textbook.
In 50-100 words each, address the questions presented at the end of the case study.
If Tito's Handmade Vodka had to identify a single social media metric that most accurately reflects its mission, it would be engagement. Connecting with vodka lovers in an inclusive, authentic way is something Tito's takes very seriously, and the brand's social strategy reflects that vision.

Founded nearly two decades ago, the brand credits the advent of social media with playing an integral role in engaging fans and raising brand awareness. In an interview with Entrepreneur, founder Bert "Tito" Beveridge credited social media for enabling Tito's to compete for shelf space with more established liquor brands. "Social media is a great platform for a word-of-mouth brand, because it's not just about who has the biggest megaphone," Beveridge told Entrepreneur.

As Tito's has matured, the social team has remained true to the brand's founding values and actively uses Twitter and Instagram to have one-on-one conversations and connect with brand enthusiasts. "We never viewed social media as another way to advertise," said Katy Gelhausen, Web & Social Media Coordinator. "We're on social so our customers can talk to us."

To that end, Tito's uses Sprout Social to understand the industry atmosphere, develop a consistent social brand, and create a dialogue with its audience. Recently, and as a result, Tito's organically grew its Twitter and Instagram communities by 43.5% and 12.6%, respectively, within 4 months.

Informing a Seasonal, Integrated Marketing Strategy

Tito's quarterly cocktail program is a key part of the brand's integrated marketing strategy. Each quarter, a cocktail recipe is developed and distributed through Tito's online and offline marketing initiatives. It is important for Tito's to ensure the recipe is aligned with the brand's focus as well as larger industry direction. Therefore, Gelhausen uses Sprout's Brand Keywords to monitor industry trends and cocktail flavor profiles. "Sprout has been a really important tool for social monitoring. The Inbox is a nice way to keep on top of hashtags and see general trends in one stream," said Gelhausen.

These learnings are presented to Tito's in-house mixology team and used to ensure the same quarterly recipe is communicated to the brand's sales team and across marketing channels. "Whether you're drinking Tito's at a bar, buying it from a liquor store or following us on social media, you're getting the same quarterly cocktail," said Gelhausen.

The program ensures that, at every consumer touchpoint, a person is receiving a consistent brand experience, and that consistency is vital. In fact, according to an Infosys study on the omnichannel shopping experience, 34% of consumers attribute cross-channel consistency as a reason they spend more with a brand. Meanwhile, 39% cite inconsistency as reason enough to spend less.

At Tito's, gathering industry insights starts with social monitoring on Twitter and Instagram through Sprout. But the brand's social strategy doesn't stop there. Staying true to its roots, Tito's uses the platform on a daily basis to authentically connect with customers. Sprout's Smart Inbox displays Tito's Twitter and Instagram accounts in a single, cohesive feed. This helps Gelhausen manage inbound messages and quickly identify which require a response. "Sprout allows us to stay on top of the conversations we're having with our followers. I love how you can easily interact with content from multiple accounts in one place," said Gelhausen.

Spreading the Word on Twitter

Tito's approach to Twitter is simple: engage in personal, one-on-one conversations with fans. Dialogue is a driving force for the brand, and over the course of 4 months, 88% of Tweets sent were replies to inbound messages. Using Twitter as an open line of communication between Tito's and its fans resulted in a 162.2% increase in engagement and a 43.5% gain in followers. Even more impressively, Tito's ended the quarter with 538,306 organic impressions, an 81% rise. A similar strategy is applied to Instagram, which Tito's uses to strengthen and foster a relationship with fans by publishing photos and videos of new recipe ideas, brand events, and initiatives.

Capturing the Party on Instagram

On Instagram, Tito's primarily publishes lifestyle content and encourages followers to incorporate the brand into everyday occasions. Tito's also uses the platform to promote its cause marketing efforts and to tell its brand story. The team finds value in Sprout's Instagram Profiles Report, which helps them identify what media is receiving the most engagement, analyze audience demographics and growth, dive deeper into publishing patterns, and quantify outbound hashtag performance. "Given Instagram's new personalized feed, it's important that we pay attention to what really does resonate," said Gelhausen.

Using the Instagram Profiles Report, Tito's has been able to measure the impact of its Instagram marketing strategy and revise its approach accordingly. By utilizing the network as another way to engage with fans, the brand has steadily grown its organic audience. In 4 months, @TitosVodka saw a 12.6% rise in followers and a 37.1% increase in engagement. On average, each piece of published content gained 534 interactions, and mentions of the brand's hashtag, #titoshandmadevodka, grew by 33%.

Where to from Here?

Social is an ongoing investment in time and attention. Tito's will continue the momentum the brand experienced by segmenting each quarter into its own campaign. "We're always getting smarter with our social strategies and making sure that what we're posting is relevant and resonates," said Gelhausen. Using social to connect with fans in a consistent, genuine, and memorable way will remain a cornerstone of the brand's digital marketing efforts. Using Sprout's suite of social media management tools, Tito's will continue to foster a community of loyalists.

Highlights:
- A 162% increase in organic engagement on Twitter
- An 81% increase in organic Twitter impressions
- A 37% increase in engagement on Instagram
Questions for Discussion
1. How can social media analytics be used in the consumer products industry?
2. What do you think are the key challenges, potential solutions, and probable results in applying social media analytics in consumer products and services firms?
[supanova_question]
MIS 600 Grand Canyon University IBM Approach to Text Analytics Discussion Writing Assignment Help
I’m working on a writing exercise and need a sample draft to help me study.
Read “About IBM SPSS Modeler Text Analytics,” view “Text Analytics in IBM SPSS Modeler 18.2,” located in the study materials, and compare to section 5.5 in Chapter 5 of the textbook.
In 100-150 words, discuss whether the IBM approach is consistent with what is in the textbook. Provide examples to support your rationale.
Read “About IBM SPSS Modeler Text Analytics,” located on the IBM website.
View “Text Analytics in IBM SPSS Modeler 18.2,” from DTE (2019), located on the YouTube website.
Section 5.5 Text Mining Process

To be successful, text mining studies should follow a sound methodology based on best practices. A standardized process model is needed, similar to the Cross-Industry Standard Process for Data Mining (CRISP-DM), which is the industry standard for data mining projects (see Chapter 4). Even though most parts of CRISP-DM are also applicable to text mining projects, a specific process model for text mining would include much more elaborate data preprocessing activities. Figure 5.5 depicts a high-level context diagram of a typical text mining process (Delen & Crossland, 2008). This context diagram presents the scope of the process, emphasizing its interfaces with the larger environment. In essence, it draws boundaries around the specific process to explicitly identify what is included in (and excluded from) the text mining process.

As the context diagram indicates, the input (inward connection to the left edge of the box) into the text-based knowledge-discovery process is the unstructured as well as structured data collected, stored, and made available to the process. The output (outward extension from the right edge of the box) of the process is the context-specific knowledge that can be used for decision making. The controls, also called the constraints (inward connection to the top edge of the box), of the process include software and hardware limitations, privacy issues, and the difficulties related to processing text that is presented in the form of natural language. The mechanisms (inward connection to the bottom edge of the box) of the process include proper techniques, software tools, and domain expertise. The primary purpose of text mining (within the context of knowledge discovery) is to process unstructured (textual) data (along with structured data, if relevant to the problem being addressed and available) to extract meaningful and actionable patterns for better decision making.

[Figure 5.5: Context diagram for the text mining process. Inputs: unstructured data (text), structured data (databases). Output: context-specific knowledge. Constraints: software/hardware limitations, privacy issues, linguistic limitations. Mechanisms: tools and techniques, domain expertise.]
At a very high level, the text mining process can be broken down into three consecutive tasks, each of which has specific inputs to generate certain outputs (see Figure 5.6). If, for some reason, the output of a task is not that which is expected, a backward redirection to the previous task execution is necessary.

Task 1: Establish the Corpus

The main purpose of the first task is to collect all the documents related to the context (domain of interest) being studied. This collection may include textual documents, XML files, e-mails, Web pages, and short notes. In addition to the readily available textual data, voice recordings may also be transcribed using speech-recognition algorithms and made a part of the text collection.

Once collected, the text documents are transformed and organized in a manner such that they are all in the same representational form (e.g., ASCII text files) for computer processing. The organization of the documents can be as simple as a collection of digitized text excerpts stored in a file folder, or it can be a list of links to a collection of Web pages in a specific domain. Many commercially available text mining software tools can accept these as input and convert them into a flat file for processing. Alternatively, the flat file can be prepared outside the text mining software and then presented as the input to the text mining application.

Task 2: Create the Term–Document Matrix

In this task, the digitized and organized documents (the corpus) are used to create the term–document matrix (TDM). In the TDM, rows represent the documents and columns represent the terms. The relationships between the terms and documents are characterized by indices (i.e., a relational measure that can be as simple as the number of occurrences of the term in the respective documents). Figure 5.7 is a typical example of a TDM.

The goal is to convert the list of organized documents (the corpus) into a TDM where the cells are filled with the most appropriate indices. The assumption is that the essence of a document can be represented with a list and frequency of the terms used in that document. However, are all terms important when characterizing documents? Obviously, the answer is "no." Some terms, such as articles, auxiliary verbs, and terms used in almost all the documents in the corpus, have no differentiating power and, therefore, should be excluded from the indexing process. This list of terms, commonly called stop terms or stop words, is specific to the domain of study and should be identified by the domain experts. On the other hand, one might choose a set of predetermined terms under which the documents are to be indexed (this list of terms is conveniently called include terms or dictionary). In addition, synonyms (pairs of terms that are to be treated the same) and specific phrases (e.g., "Eiffel Tower") can also be provided so that the index entries are more accurate.

[Figure 5.6: The Three-Step/Task Text Mining Process. Task 1 (Establish the Corpus): collect and organize the domain-specific unstructured data; its output is a collection of documents in some digitized format for computer processing. Task 2 (Create the Term–Document Matrix): introduce structure to the corpus; its output is a flat file called a term–document matrix, whose cells are populated with term frequencies. Task 3 (Extract Knowledge): discover novel patterns from the T–D matrix; its output is a number of problem-specific classification, association, and clustering models and visualizations. Feedback loops connect each task back to the previous one. Inputs to the process include a variety of relevant unstructured (and semi-structured) data sources such as text, XML, HTML, etc.]

Another step that should take place to accurately create the indices is stemming, which refers to the reduction of words to their roots so that, for example, different grammatical forms or declinations of a verb are identified and indexed as the same word. For example, stemming will ensure that modeling and modeled will be recognized as the word model.

The first generation of the TDM includes all the unique terms identified in the corpus (as its columns), excluding the ones in the stop term list; all the documents (as its rows); and the occurrence count of each term for each document (as its cell values). If, as is commonly the case, the corpus includes a rather large number of documents, then there is a very good chance that the TDM will have a very large number of terms. Processing such a large matrix might be time-consuming and, more important, might lead to extraction of inaccurate patterns. At this point, one has to decide the following: (1) What is the best representation of the indices? and (2) How can we reduce the dimensionality of this matrix to a manageable size?

Once the input documents are indexed and the initial word frequencies (by document) computed, a number of additional transformations can be performed to summarize and aggregate the extracted information. The raw term frequencies generally reflect how salient or important a word is in each document. Specifically, words that occur with greater frequency in a document are better descriptors of the contents of that document. However, it is not reasonable to assume that the word counts themselves are proportional to their importance as descriptors of the documents. For example, if a word occurs one time in document A but three times in document B, it is not necessarily reasonable to conclude that this word is three times as important a descriptor of document B as compared to document A. To have a more consistent TDM for further analysis, these raw indices need to be normalized. As opposed to showing the actual frequency counts, the numerical representation between terms and documents can be normalized using a number of alternative methods, such as log frequencies, binary frequencies, and inverse document frequencies, among others.
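As a hedged sketch of Task 2 (my own illustration, not from the textbook; scikit-learn and the toy corpus are assumptions), the snippet below builds a raw term–document matrix and a normalized variant that uses inverse document frequency weighting, with common English stop words excluded.

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

# A toy corpus standing in for the "organized documents" of Task 1.
corpus = [
    "The software implementation failed due to poor planning.",
    "Enterprise resource planning software improves planning.",
    "Customer relationship management depends on clean data.",
]

# Raw term-document matrix: rows = documents, columns = terms,
# cells = occurrence counts; common English stop words are excluded.
count_vec = CountVectorizer(stop_words="english")
tdm_counts = count_vec.fit_transform(corpus)
print(count_vec.get_feature_names_out())
print(tdm_counts.toarray())

# Normalized indices: inverse-document-frequency weighting (TF-IDF),
# one of the normalization alternatives mentioned above.
tfidf_vec = TfidfVectorizer(stop_words="english")
tdm_tfidf = tfidf_vec.fit_transform(corpus)
print(tdm_tfidf.toarray().round(2))
```

The two printed matrices show the same structure (documents by terms); only the indices in the cells change with the normalization choice.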
Because the TDM is often very large and rather sparse (most of the cells filled with zeros), another important question is, "How do we reduce the dimensionality of this matrix to a manageable size?" Several options are available for managing the matrix size:

- A domain expert goes through the list of terms and eliminates those that do not make much sense for the context of the study (this is a manual, labor-intensive process).
- Eliminate terms with very few occurrences in very few documents.
- Transform the matrix using singular value decomposition (SVD).

Singular value decomposition (SVD), which is closely related to principal components analysis, reduces the overall dimensionality of the input matrix (number of input documents by number of extracted terms) to a lower-dimensional space, where each consecutive dimension represents the largest degree of variability (between words and documents) possible (Manning & Schutze, 1999). Ideally, the analyst might identify the two or three most salient dimensions that account for most of the variability (differences) between the words and documents, thus identifying the latent semantic space that organizes the words and documents in the analysis. Once such dimensions are identified, the underlying "meaning" of what is contained (discussed or described) in the documents has been extracted.
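As an illustration of the SVD option (my own sketch; scikit-learn's TruncatedSVD, which performs this kind of latent semantic analysis on a sparse term–document matrix, is an assumption rather than the textbook's tool), the snippet below projects a small TF-IDF matrix onto two latent dimensions and lists the terms that load most heavily on each.

```python
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "enterprise resource planning software",
    "software implementation failure report",
    "customer relationship management strategy",
    "customer loyalty and relationship strategy",
]

# Build the (documents x terms) TF-IDF matrix, then project it onto
# two latent dimensions -- the "latent semantic space" described above.
tfidf = TfidfVectorizer()
X = tfidf.fit_transform(corpus)

svd = TruncatedSVD(n_components=2, random_state=0)
docs_2d = svd.fit_transform(X)           # documents in the reduced space
print(docs_2d.round(2))
print(svd.explained_variance_ratio_)     # variability captured per dimension

# Terms with the largest loadings on each latent dimension.
terms = tfidf.get_feature_names_out()
for i, comp in enumerate(svd.components_):
    top = comp.argsort()[::-1][:3]
    print(f"dimension {i}:", [terms[j] for j in top])
```

In this toy run the two dimensions roughly separate the "planning/software" documents from the "customer/relationship" documents, which is the kind of latent structure the SVD step is meant to surface.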
Task 3: Extract the Knowledge

Using the well-structured TDM, and potentially augmented with other structured data elements, novel patterns are extracted in the context of the specific problem being addressed. The main categories of knowledge extraction methods are classification, clustering, association, and trend analysis. A short description of these methods follows.

Classification. Arguably the most common knowledge-discovery topic in analyzing complex data sources is the classification (or categorization) of certain objects. The task is to classify a given data instance into a predetermined set of categories (or classes). As it applies to the domain of text mining, the task is known as text categorization, where for a given set of categories (subjects, topics, or concepts) and a collection of text documents, the goal is to find the correct topic (subject or concept) for each document using models developed with a training data set that includes both the documents and the actual document categories. Today, automated text classification is applied in a variety of contexts, including automatic or semiautomatic (interactive) indexing of text, spam filtering, Web page categorization under hierarchical catalogs, automatic generation of metadata, detection of genre, and many others.

The two main approaches to text classification are knowledge engineering and machine learning (Feldman & Sanger, 2007). With the knowledge-engineering approach, an expert's knowledge about the categories is encoded into the system either declaratively or in the form of procedural classification rules. With the machine-learning approach, a general inductive process builds a classifier by learning from a set of preclassified examples. As the number of documents increases at an exponential rate and as knowledge experts become harder to come by, the popularity trend between the two is shifting toward the machine-learning approach.
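The machine-learning approach to text categorization can be shown with a minimal sketch (my own; the library choice, documents, and category labels are hypothetical, not from the textbook): a classifier is induced from a handful of preclassified documents and then applied to a new one.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny preclassified training set (hypothetical documents and labels).
train_docs = [
    "great cocktail recipe with vodka and lime",
    "new vodka brand launches seasonal drink",
    "quarterly earnings beat analyst expectations",
    "stock price falls after earnings report",
]
train_labels = ["beverage", "beverage", "finance", "finance"]

# The inductive (machine-learning) approach: learn a classifier from
# preclassified examples, then assign categories to new documents.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(train_docs, train_labels)

print(model.predict(["analysts expect strong quarterly earnings"]))  # expected: ['finance']
```

The same pipeline scales to realistic corpora by swapping in the full document collection and category labels.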
Clustering. Clustering is an unsupervised process whereby objects are classified into "natural" groups called clusters. Compared to categorization, where a collection of preclassified training examples is used to develop a model based on the descriptive features of the classes in order to classify a new unlabeled example, in clustering the problem is to group an unlabeled collection of objects (e.g., documents, customer comments, Web pages) into meaningful clusters without any prior knowledge.

Clustering is useful in a wide range of applications, from document retrieval to enabling better Web content searches. In fact, one of the prominent applications of clustering is the analysis and navigation of very large text collections, such as Web pages. The underlying assumption is that relevant documents tend to be more similar to each other than to irrelevant ones, so clustering a document collection can improve search in two ways:

- Improved search recall. Because clustering is based on overall similarity rather than on the presence of a single term, it can improve the recall of a query-based search in such a way that when a query matches a document, its whole cluster is returned.
- Improved search precision. Clustering can also improve search precision. As the number of documents in a collection grows, it becomes difficult to browse through the list of matched documents. Clustering can help by grouping the documents into a number of much smaller groups of related documents, ordering them by relevance, and returning only the documents from the most relevant group (or groups).

The two most popular clustering methods are scatter/gather clustering and query-specific clustering:

- Scatter/gather. This document-browsing method uses clustering to enhance the efficiency of human browsing of documents when a specific search query cannot be formulated. In a sense, the method dynamically generates a table of contents for the collection and adapts and modifies it in response to the user selection.
- Query-specific clustering. This method employs a hierarchical clustering approach where the most relevant documents to the posed query appear in small, tight clusters that are nested in larger clusters containing less-similar documents, creating a spectrum of relevance levels among the documents. This method performs consistently well for document collections of realistically large sizes.

Association. A formal definition and detailed description of association was provided in the chapter on data mining (Chapter 4). Association (or association rule learning) in data mining is a popular and well-researched technique for discovering interesting relationships among variables in large databases. The main idea in generating association rules (or solving market-basket problems) is to identify the frequent sets that go together.

In text mining, associations specifically refer to the direct relationships between concepts (terms) or sets of concepts. The concept-set association rule A ⇒ C, relating two frequent concept sets A and C, can be quantified by the two basic measures of support and confidence. In this case, confidence is the percentage of documents that include all the concepts in C within the same subset of those documents that include all the concepts in A. Support is the percentage (or number) of documents that include all the concepts in A and C.

For instance, in a document collection the concept "Software Implementation Failure" may appear most often in association with "Enterprise Resource Planning" and "Customer Relationship Management," with significant support (4%) and confidence (55%), meaning that 4% of the documents had all three concepts represented together in the same document, and that of the documents that included "Software Implementation Failure," 55% also included "Enterprise Resource Planning" and "Customer Relationship Management."

Text mining with association rules was used to analyze published literature (news and academic articles posted on the Web) to chart the outbreak and progress of the bird flu (Mahgoub et al., 2008). The idea was to automatically identify the associations among geographic areas, spreading across species, and countermeasures (treatments).

Trend Analysis. Recent methods of trend analysis in text mining have been based on the notion that the various types of concept distributions are functions of document collections; that is, different collections lead to different concept distributions for the same set of concepts. It is, therefore, possible to compare two distributions that are otherwise identical except that they are from different subcollections. One notable direction of this type of analysis is having two collections from the same source (such as the same set of academic journals) but from different points in time. Delen and Crossland (2008) applied trend analysis to a large number of academic articles (published in the three highest-rated academic journals) to identify the evolution of key concepts in the field of information systems.
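To make the support and confidence arithmetic from the association discussion concrete, here is a small self-contained sketch (my own illustration; the document tags are made up and unrelated to the Mahgoub et al. study) that computes both measures for a concept-set rule A ⇒ C.

```python
# Each document is represented by the set of concepts it mentions
# (hypothetical tags, for illustration only).
docs = [
    {"Software Implementation Failure", "Enterprise Resource Planning",
     "Customer Relationship Management"},
    {"Software Implementation Failure", "Enterprise Resource Planning"},
    {"Software Implementation Failure"},
    {"Customer Relationship Management"},
    {"Enterprise Resource Planning", "Customer Relationship Management"},
]

A = {"Software Implementation Failure"}
C = {"Enterprise Resource Planning", "Customer Relationship Management"}

has_A = [d for d in docs if A <= d]           # documents containing all of A
has_A_and_C = [d for d in has_A if C <= d]    # ...that also contain all of C

support = len(has_A_and_C) / len(docs)        # share of all documents with A and C
confidence = len(has_A_and_C) / len(has_A)    # share of A-documents that also have C

print(f"support    = {support:.0%}")          # 20% for this toy collection
print(f"confidence = {confidence:.0%}")       # 33% for this toy collection
```

The 4% support and 55% confidence in the textbook's example are the same two ratios computed over its (much larger) document collection.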
[supanova_question]
NYU Company Strategies Influenced by The Lawful Framework Question Economics Assignment Help
I’m working on an economics writing question and need an explanation to help me study.
Q1:
1. What are the two main categories of horizontal practices that would raise antitrust concerns? (1 point)
2. Briefly describe a real-world example for each of those two main categories. Avoid using the cases presented in the case studies in the Antitrust Revolution textbook. (2 points)
Q2:
1. What are “facilitating practices” in the context of anticompetitive horizontal practices? (1 point)
2. Give two examples of such “facilitating practices”. (2 points)
Q3:
1. What is “classic predation”? (1 point)
2. Chicago School adherents have argued that predation is rarely rational. What are the two main reasons that they provide to support their claim? (2 points)
Q4:
Claim: “Following the entrance of a new coffee chain in a city, the incumbent coffee chain lowers its price by more than half. Thus, the incumbent is engaging in predatory pricing.” Do you agree with this claim? Explain. How would you determine whether predatory pricing indeed took place? Describe the analytical steps that are required.
[supanova_question]
MATH 10C De Anza College Function of several variables Midterm Discussion Mathematics Assignment Help
The midterm usually has 4-5 questions, and I think I can do most of them except the last one. The last one is usually harder, so I might need some help with that.
I will send you the midterm paper once I start it, and you can focus on the last question first. I think you will have 40 minutes to do it, which is enough. If I have trouble with any other questions, I hope you can help me a little.
The attached file is an example midterm.
It covers Sections 10.2, 10.4, 11.1, and 11.3-11.7, which are basically:
- Functions of several variables
- Partial derivatives
- Tangent planes and linear approximations
- The chain rule
- Directional derivatives and the gradient vector
[supanova_question]
MKT 438 Phoenix Wk 5 Public Relations Role and Impact of Social Media in PR Essay Business Finance Assignment Help
Resources: The Practice of Public Relations, Ch. 10; Role and Impact of Social Media in PR Grading Guide
Review the Case “Don’t Mess with the Queen of Social Media” on page 221 in The Practice of Public Relations, Ch. 10, and use the questions at the end of the chapter as a basis of your discussion.
Describe what PR recommendations you would have for Taylor Swift if you were her Public Relations Consultant.
Incorporate the principles of PR that you have learned to date.
Develop a 700- to 1,050-word recommendation as part of your response.
Use two outside references to support your points.
Format your paper consistent with APA guidelines.
Submit your assignment.
[supanova_question]
AHM 2020 Florida International University Polarization in American Society Essay Humanities Assignment Help
Write a critical response essay to engage from a historical perspective one recent development in the United States with local and global implications. To prepare your essay, 1) choose one of the two topics outlined below, 2) evaluate recent news coverage on that topic, and 3) select three credible reports that enable you to draw comparisons between historical and current developments.
1) African American / PoC Struggle for Equality
Several events in recent months have rekindled a debate on racial equality and fueled political activism. How do these efforts connect to the Civil Rights movement of the late 1950s and early 1960s? What continuities or differences are apparent? How do current initiatives and debates on the matter at the national level impact local communities and global perceptions of the US?
2) Polarization in American Society
The national discourse has grown heated and intense lately. Different groups in society and politics interact with each other in increasingly aggravated ways. Is this a new phenomenon (for instance, as a result of social media) or the latest stage in a historical trend? How do deepening divides impact local communities, the nation itself, and perceptions of the US as a global power?
Write a 400- to 500-word response essay. Make sure that your essay has an introduction with a thesis statement, (short) thematic body paragraphs, and a very brief conclusion. Cite three credible news reports (demonstrate that you can tell substantiated sources from questionable ones) in your explanation of current events, with footnotes and a bibliography. The Chicago Manual of Style Sheet provides advice on how to cite online news articles (under "News or magazine article"). State the subject of your essay in the title.
Submit your essay as a .docx or .pdf file.
[supanova_question]
Collin County Community College Catechol Oxidase Reaction Lab Report Science Assignment Help
Download the report sheet, complete it, and then submit it via Blackboard email as an attachment. Do not change the format.
All blue shaded areas require answers.
Introduction: Catechol oxidase reaction
Name of substrates |
Name of enzyme |
Name of products |
Enzyme was extracted from |
Color of benzoquinone |
I. Preparation of standard tubes
Experiment (dry lab)
Copy the color intensity for the 20 min standard tubes from the slide.
Time (min) | Tube A | Tube B | Tube C |
20 | | | |
II. Specificity
Experiment (dry lab)
Record intensity (scale 0 – 5) using the standards (A-C) as your guide
Time (min) | Tube D | Tube E |
20 | | |
1. Based on the color intensity of the tubes, which substrate reacts better with catechol oxidase to produce benzoquinone? Choices are catechol (tube D) or hydroquinone (tube E). |
|
2. Does catechol oxidase express specificity? Yes or No. |
III. Temperature
At extremely high temperatures and extreme pH values, the structure of the enzyme is permanently changed to the point where it can no longer function. This change in the structure of the enzyme is described as |
Experiment (dry lab)
Record the color of each tube at 20 minutes on a scale of 0-5 using your standards (A-C) as a guide. Plot your results on the graph below and connect the dots.
Time (min) | 5°C ice water | 25°C room temp water | 40°C water bath | 60°C water bath | 80°C water bath | 100°C boiling water |
20 | | | | | | |
At what temperature does catechol oxidase work best? |
(Plot the graph using Microsoft Insert Shapes, or graph manually, then scan and embed the image into the report.)
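If you want a reproducible way to draw the graph, here is an optional Python sketch (my own, not part of the report sheet; the intensity values are placeholders to be replaced with your recorded data) that plots color intensity against temperature and connects the dots. The same pattern works for the pH series by swapping the x values and axis label.

```python
import matplotlib.pyplot as plt

# Hypothetical 20-minute color intensities (scale 0-5) -- replace with
# your recorded values from the dry lab.
temperatures_c = [5, 25, 40, 60, 80, 100]
intensity = [1, 3, 5, 4, 1, 0]

plt.plot(temperatures_c, intensity, marker="o")  # connect the dots
plt.xlabel("Temperature (°C)")
plt.ylabel("Color intensity (0-5)")
plt.title("Catechol oxidase activity vs. temperature (20 min)")
plt.show()
```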
Experiment (at home lab)
QUESTION
Observe the color of the apple surface. Which slice has produced more benzoquinone, 4°C slice or 25°C slice? |
IV. pH
Experiment (dry lab)
Record the color of each tube at 20 minutes on a scale of 0-5 using your standards (A-C). Plot your results on the graph below and connect the dots.
Time (min) | pH 2 | pH 4 | pH 6 | pH 7 | pH 8 | pH 10 | pH 12 |
20 | | | | | | | |
At what pH does catechol oxidase work best? |
(Plot the graph using Microsoft Insert Shapes, or graph manually, then scan and embed the image into the report.)
Experiment (at home lab)
QUESTION
Observe the color of the apple surface. Which slice has produced most benzoquinone, pH3 (vinegar), pH7 (water), or pH12 (bleach)? |
V. Cofactor
Experiment (dry lab)
Record the color of each tube at 20 minutes on a scale of 0-5 using your standards (A-C).
Time (min) | Tube F | Tube G |
20 | | |
Tube G has enzyme mixed with EDTA. The EDTA inactivated the cofactor of catechol oxidase.
Is the cofactor (Cu++) necessary? In other words, does catechol oxidase in tube G need its cofactor to function? Yes or No. |
QUESTIONS:
1. What type of experiment was done in the lab, quantitative or qualitative? |
|
2. Can you think of a way to improve the experiment to make it less subjective and more reliable? (Hint: color is a type of light being reflected; which instrument measures reflected light?) |
After you complete the experiment and collect data,
1. Post a picture of your completed experiment showing the effect of temperature and pH on enzyme function. (2pts)
2. Explain your results in no less than 4 complete sentences. (2pts)
3. Ask a question/ comment on someone’s post from your class/answer a question, in a respectful and thoughtful way. (1pt)
[supanova_question]
University of Phoenix The Energy Policy and Natural Resources Presentation Science Assignment Help
You are part of a consulting group that has been invited by the presidential administration to present on energy policies and the use of natural resources.
Discuss the following in a 15- to 20-slide Microsoft® PowerPoint® presentation with notes:
- History of U.S. energy policies over the 20th and 21st centuries
- Comparison of coal, nuclear, and at least two renewable energy sources in terms of environmental effects, suitability for large-scale energy supply, and economic considerations
- Recommendations to the administration on how to improve the current energy policies considering environmental sustainability and economic growth
- Perform research on the Roadless Area Conservation Rule and the Bush administration’s attempt at its repeal. Present both sides of this issue and make recommendations regarding the continuance of the Roadless Rule.
Cite at least three references.
[supanova_question]