NPTEL Big Data Computing Week 5 Assignment 5 Answers 2022

NPTEL Big Data Computing Week 5 Assignment 5 Answer: In this article, you will find the NPTEL Big Data Computing Week 5 Assignment 5 answers. Use Ctrl+F to jump to any question. On mobile, tap the three dots in your browser and choose the "Find" option there to search for any question.

Note: We are doing our best; please share this answer link with other students as well.

Q1. True or False?

Apache HBase is a column-oriented, NoSQL database designed to operate on top of the Hadoop distributed file system (HDFS).

  • True
  • False

Answer: True

Q2. A small chunk of data residing in one machine, which is part of a cluster of machines holding one HBase table, is known as:

  • Rowarea
  • Tablearea
  • Region
  • Split

Answer: Region

Q3. In HBase, what is the number of MemStores per column family?

  • 1
  • 2
  • 3
  • Equal to as many columns in the column family.

Answer: 1

Q4. In HBase, ___ is a combination of row, column family, and column qualifier, and contains a value and a timestamp.

  • Stores
  • HMaster
  • Region Server
  • Cell

Answer: Cell
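To make the cell concept from this question concrete, here is a minimal toy sketch in Python (not the real HBase client API; the table, rows, and values are hypothetical). It models how a cell is addressed by row key, column family, column qualifier, and timestamp, and how the newest timestamp wins on a read:

```python
from collections import namedtuple

# Toy model (not the real HBase API): a cell is identified by row key,
# column family, column qualifier, and timestamp, and holds a value.
Cell = namedtuple("Cell", ["row", "family", "qualifier", "timestamp", "value"])

# An HBase-style table maps that four-part coordinate to a value;
# several timestamps for the same coordinate mean several versions.
table = {}

def put(row, family, qualifier, timestamp, value):
    table[(row, family, qualifier, timestamp)] = value

def get_latest(row, family, qualifier):
    """Return the version with the highest timestamp, as an HBase read does."""
    versions = [(ts, v) for (r, f, q, ts), v in table.items()
                if (r, f, q) == (row, family, qualifier)]
    ts, v = max(versions)  # newest version wins
    return Cell(row, family, qualifier, ts, v)

put("user1", "info", "email", 1, "old@example.com")
put("user1", "info", "email", 2, "new@example.com")
print(get_latest("user1", "info", "email").value)  # prints new@example.com
```

The point of the sketch is only that the value is never looked up by row alone: the full (row, family, qualifier, timestamp) coordinate identifies it.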

Q5. HBase architecture has 3 main components:

  • Client, Column family, Region Server
  • HMaster, Region Server, Zookeeper
  • Cell, Rowkey, Stores
  • HMaster, Stores, Region Server

Answer: HMaster, Region Server, Zookeeper

Q6. True or False?

Kafka is a high-performance, real-time messaging system. It is an open-source tool and is a part of the Apache projects.

  • True
  • False

Answer: True

Q7. Kafka maintains feeds of messages in categories called __

  • Chunks
  • Domains
  • Messages
  • Topics

Answer: Topics
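As a rough illustration of what a topic is (a toy sketch, not the real Kafka API; the topic name and messages are made up), a topic can be thought of as a named category of messages split into partitions, each an append-only log read by offset:

```python
# Toy sketch of Kafka's core abstraction (not the real Kafka client):
# a topic is a named feed of messages, divided into partitions,
# each of which is an append-only log addressed by offset.
class Topic:
    def __init__(self, name, num_partitions=2):
        self.name = name
        self.partitions = [[] for _ in range(num_partitions)]

    def publish(self, key, value):
        # Keyed messages go to a partition chosen from the key,
        # so messages with the same key keep their order.
        p = hash(key) % len(self.partitions)
        self.partitions[p].append((key, value))
        return p, len(self.partitions[p]) - 1  # (partition, offset)

    def read(self, partition, offset):
        return self.partitions[partition][offset]

clicks = Topic("page-clicks")
part, off = clicks.publish("user1", "/home")
print(clicks.read(part, off))  # ("user1", "/home")
```

Real Kafka adds durability, replication, and consumer groups on top of this idea, but the topic-as-categorized-log picture is the core of the answer above.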

Q8. True or False?

Statement 1: Batch processing provides the ability to process and analyze data at rest (stored data).

Statement 2: Stream processing provides the ability to ingest, process, and analyze data in motion, in real or near-real time.

  • Only Statement 1 is true
  • Only Statement 2 is true
  • Both Statements are true.
  • Both Statements are False.

Answer: Both Statements are true.

Q9. What exactly are Kafka's key capabilities?

  • Publish and subscribe to streams of records, similar to a message queue or enterprise messaging system
  • Store streams of records in a fault-tolerant durable way
  • Process streams of records as they occur
  • All of the mentioned

Answer: All of the mentioned

Q10. ___ is a framework to import event streams from other source data systems into Kafka and export event streams from Kafka to destination data systems.

  • Kafka Core
  • Kafka Connect
  • Kafka Streams
  • None of the mentioned

Answer: Kafka Connect

Q11. ___ is a central hub to transport and store event streams in real time.

  • Kafka Core
  • Kafka Connect
  • Kafka Streams
  • None of the mentioned

Answer: Kafka Core

Q12. __ is a Java library to process event streams live as they occur.

  • Kafka Core
  • Kafka Connect
  • Kafka Streams
  • None of the mentioned

Answer: Kafka Streams
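The division of labor among the three components in Q10–Q12 can be sketched as a toy pipeline (hypothetical function names, not real Kafka APIs): Kafka Core stores the event stream, Kafka Connect moves events between Kafka and external systems, and Kafka Streams processes them live.

```python
log = []                              # stands in for "Kafka Core": the central event log

def source_connector(rows):
    """'Connect'-style import: copy records from a source system into the log."""
    log.extend(rows)

def sink_connector():
    """'Connect'-style export: copy records from the log to a destination system."""
    return list(log)

def streams_app(events):
    """'Streams'-style processing: transform each event live as it occurs."""
    for e in events:
        yield e.upper()

source_connector(["signup", "click"])  # import from an external source
print(list(streams_app(log)))          # ['SIGNUP', 'CLICK']
print(sink_connector())                # ['signup', 'click'] exported to a sink
```

The sketch only names the roles; in real deployments Connect runs as a separate worker process and Streams is a Java library embedded in an application.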



These answers are provided for discussion purposes only; if any answer turns out to be wrong, please don't blame us. If you have any doubts or suggestions regarding any question, kindly comment. The solution is provided by Chase2learn. This tutorial is only for discussion and learning purposes.


About NPTEL Big Data Computing Course: 

In today’s fast-paced digital world, an incredible amount of data is generated every minute: sensors gathering climate information, posts to social media sites, digital pictures and videos, purchase transaction records, and GPS signals from cell phones, to name a few. Data of this volume, with different velocities and varieties, is termed big data, and its analytics enables professionals to convert extensive data, through statistical and quantitative analysis, into powerful insights that can drive efficient decisions.



The course structure and content cover, over a period of 8 weeks:

  • Week 1 : Introduction to Big Data
  • Week 2 : Introduction to Enabling Technologies for Big Data
  • Week 3 : Introduction to Big Data Platforms
  • Week 4 : Introduction to Big Data Storage Platforms for Large Scale Data Storage
  • Week 5 : Introduction to Big Data Streaming Platforms for Fast Data
  • Week 6 : Introduction to Big Data Applications (Machine Learning)
  • Week 7 : Introduction to Big Data Machine Learning with Spark
  • Week 8 : Introduction to Big Data Applications (Graph Processing)
