

NoSQL Databases

146

Solutions

293

Contributors

A NoSQL database provides a mechanism for storage and retrieval of data that is modeled by means other than the tabular relations used in relational databases. Motivations for this approach include simplicity of design, simpler "horizontal" scaling to clusters of machines, and finer control over availability. The data structures used by NoSQL databases (e.g. key-value, wide column, graph, or document) are different from those used by default in relational databases, making some operations faster in NoSQL. The data structures used by NoSQL databases are also sometimes viewed as "more flexible" than relational database tables.


How do I get the last document inserted in MongoDB?

I'm using Node.js, Express and Mongoose.
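Assuming the collection's `_id` values are default ObjectIds (which begin with a creation timestamp), a common approach is to sort on `_id` descending and take one document. A minimal sketch, with the model name left as a parameter since the question doesn't name one:

```javascript
// Sketch: fetch the most recently inserted document by sorting on _id
// descending. Default ObjectIds embed a creation timestamp, so the
// highest _id is (to one-second resolution) the newest insert.
function lastInserted(Model) {
  return Model.findOne().sort({ _id: -1 }); // returns a Mongoose Query
}

// Usage with a real model:
//   const doc = await lastInserted(Item).exec();
```

Note this only works for auto-generated ObjectIds; if `_id`s are supplied by the application, sort on a `createdAt` timestamp field instead.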

A new API requires us to receive about 45 KB of information per row (record) and save it in a database; we think about 700 GB of data will be added per year. The table is simple, with 50 columns (fields). We are not sure whether this is a job for MongoDB or MySQL. The setup must be able to scale horizontally, and the table will also be queried and searchable. About 1,000 transactions per second.
I have a system in Elasticsearch collecting performance metrics from my network. It occupies a lot of disk space, and I need to know how to optimize that. My idea is to keep detailed samples for a recent window (for example, one or two weeks) and, beyond that time, keep historical samples at lower detail.
I need to start organizing how to carry this forward.
I have two collections, name and address, which I exported as JSON from a relational database.

I want to merge the two collections as below; how can I do that?

Thanks
{
    "_id" : ObjectId("5bba4cadb20b7e2f3c1436fd"),
    "o_id" : 5,
    "surname" : "sam",
    "firstname" : "sam",
    "created_date" : "2018-10-07",
    "address" : {
        "address1" : "177 road",
        "city" : "MI"
    }
}
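One way to do this without touching the database is to join the two exported arrays in Node on the shared key before re-importing. A sketch, assuming both exports carry the relational `o_id` as the join key and the field names match the sample above (adjust to the real exports):

```javascript
// Sketch: merge an array of "name" documents with an array of "address"
// documents on a shared o_id key, producing documents shaped like the
// example above. Field names are assumptions based on the sample.
function mergeByOid(names, addresses) {
  // Index addresses by o_id for O(1) lookup.
  const byOid = new Map(addresses.map(a => [a.o_id, a]));
  return names.map(n => {
    const addr = byOid.get(n.o_id) || {};
    return {
      o_id: n.o_id,
      surname: n.surname,
      firstname: n.firstname,
      created_date: n.created_date,
      address: { address1: addr.address1, city: addr.city },
    };
  });
}
```

Server-side, the same join can be expressed with the aggregation `$lookup` stage, which may be preferable for large collections.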
Redis is used to cache data; if at some point we want to persist that data to a SQL/NoSQL database, is there a straightforward way?
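There is no built-in Redis-to-SQL bridge, so the usual patterns are write-through (write to both stores at write time) or a periodic job that SCANs the keyspace and upserts rows. A minimal, store-agnostic sketch of the flush step (the table and column names are made up for illustration; execution would go through whatever SQL client is in use):

```javascript
// Sketch: turn cached key/value entries into parameterized upsert
// statements. "cache_snapshot" and its columns are hypothetical names.
function toUpsertBatch(entries) {
  return entries.map(([key, value]) => ({
    sql: 'REPLACE INTO cache_snapshot (cache_key, cache_value) VALUES (?, ?)',
    params: [key, JSON.stringify(value)],
  }));
}

// Usage (pseudo): for (const { sql, params } of toUpsertBatch(scanned))
//   await connection.execute(sql, params);
```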
Sir,

I need help with a Cassandra data model: is there a better way to design our table structure so that our bulk reads would be faster?
Please suggest.


CREATE TABLE IF NOT EXISTS screening_counts_by_location_year (
      total_screened_0_6 counter,
      male_0_6 counter,
      female_0_6 counter,

      total_screened_6_18 counter,
      male_6_18 counter,
      female_6_18 counter,

      total_screened counter,
      total_male counter,
      total_female counter,

      awc_screened counter,
      school_screened counter,
      college_screened counter,

      year varchar,

      location varchar,
      parent varchar,
      PRIMARY KEY((parent, location, year)));



CREATE TABLE IF NOT EXISTS screening_counts_by_location_year_mht (
      total_screened_0_6 counter,
      male_0_6 counter,
      female_0_6 counter,

      total_screened_6_18 counter,
      male_6_18 counter,
      female_6_18 counter,

      total_screened counter,
      total_male counter,
      total_female counter,

      awc_screened counter,
      school_screened counter,
      college_screened counter,
      mht_id varchar,
      year varchar,
      location varchar,
      parent varchar,
      PRIMARY KEY((parent, location, year), mht_id))
      WITH CLUSTERING ORDER BY (mht_id DESC);




CREATE TABLE rbsk_andr_v2.beneficiaryrecord (
    visitationpointtype text,
    reportdate date,
    zone text,
    district text,
    revenuedivision text,
    mandal text,
    village text,
    visitationpointcode text,
    universalid text,
    attendance text,
Dear Experts,

We have a back-end application running on Cassandra, which holds all our transaction data, but we also have static data in MySQL.
I would like to know: if I want to run queries with joins between tables across the two databases, how can that be achieved?

Thanks
Roy...
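Cassandra has no cross-database joins (and no joins at all, by design), so this is normally done in the application: fetch the static rows from MySQL once, index them by key, and enrich the Cassandra rows in memory. A sketch of that client-side hash join (the field names are illustrative, not from your schema):

```javascript
// Sketch: client-side "join" between transaction rows (from Cassandra)
// and static reference rows (from MySQL), keyed on a shared id column.
// customer_id / customer_name are example field names only.
function enrich(transactions, staticRows, key) {
  const index = new Map(staticRows.map(r => [r[key], r]));
  return transactions.map(t => ({ ...t, ...(index.get(t[key]) || {}) }));
}
```

If the static data is small and rarely changes, caching it in the application (or denormalizing it into the Cassandra tables) avoids repeating the MySQL fetch per query.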
Connection Error: No Mongos proxy available.

Environment: Node.js, MongoDB, Mongoose

Start.js
const mongoose = require('mongoose');


mongoose.connect(process.env.DATABASE,  { useNewUrlParser: true })
mongoose.Promise = global.Promise;
mongoose.connection
  .on('connected', () => {
    console.log(`Mongoose connection open on ${process.env.DATABASE}`);
  })
  .on('error', (err) => {
    console.log(`Connection error: ${err.message}`);
  });

.env file

DATABASE=mongodb://username:DBpassword@player-shard-00-00-1cmur.mongodb.net:27017,player-shard-00-01-1cmur.mongodb.net:27017,player-shard-00-02-1cmur.mongodb.net:27017
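For what it's worth, non-SRV Atlas connection strings for a replica set usually also need the `ssl`, `replicaSet`, and `authSource` options; without them the driver can fail to connect with errors like the one above. A hedged example (the replica-set name `player-shard-0` here is a guess — the real name and full string are shown in the Atlas connect dialog):

```
DATABASE=mongodb://username:DBpassword@player-shard-00-00-1cmur.mongodb.net:27017,player-shard-00-01-1cmur.mongodb.net:27017,player-shard-00-02-1cmur.mongodb.net:27017/test?ssl=true&replicaSet=player-shard-0&authSource=admin
```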
I want to write an init script to stop and start MongoDB.
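A minimal SysV-style sketch (the paths below are common defaults and may differ per distro; on systemd machines the packaged `mongod.service` unit is usually the better route):

```shell
#!/bin/sh
# Sketch of an init script for mongod. MONGOD, CONF and PIDFILE are
# assumptions: adjust them to match the actual installation.
MONGOD=/usr/bin/mongod
CONF=/etc/mongod.conf
PIDFILE=/var/run/mongodb/mongod.pid

mongo_start() {
  # --fork daemonizes; the pid file lets us stop the daemon cleanly.
  "$MONGOD" --config "$CONF" --pidfilepath "$PIDFILE" --fork
}

mongo_stop() {
  [ -f "$PIDFILE" ] && kill "$(cat "$PIDFILE")"
}

# Only dispatch when an action was actually passed on the command line.
if [ $# -gt 0 ]; then
  case "$1" in
    start)   mongo_start ;;
    stop)    mongo_stop ;;
    restart) mongo_stop; sleep 2; mongo_start ;;
    *)       echo "Usage: $0 {start|stop|restart}"; exit 1 ;;
  esac
fi
```

Install it as `/etc/init.d/mongod`, make it executable, and register it with the distro's tooling (`chkconfig` or `update-rc.d`).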
Hi,
I am currently working on migrating data from one Cassandra cluster table to another Cassandra cluster table using a Spark batch job.

I need Spark code that reads the data from the source Cassandra table, processes it in Spark, and then writes the results to the existing destination Cassandra table.

Hi,
Suppose I have a MySQL table with columns (originalUrl varchar(500), shortUrl varchar(10)).
The queries that will be executed on this table will mostly be
1. select * from table where shortUrl = X
2. insert into table (originalUrl, shortUrl)

So there should be an index on shortUrl to speed this up.

I have the following questions:

1. What exactly will the index store?
My understanding is that the index will store items like (shortUrl, pointerToDisk), where pointerToDisk locates exactly the place on disk where the row is stored.

2. Where is the index stored?
Is it always stored on disk or in memory?

3. What if the size of the index exceeds that of RAM?
In this case the full index will never fit in RAM, so how will queries like select * from table where shortUrl = x execute? Will a part of the index be pulled in every time to check the location?

4. In the case where this table is very large, say 3 TB, how big will the index be?

5. If the index is larger than RAM, the queries will presumably take a lot of time. Is there a better alternative, such as using a NoSQL database, or splitting the data across two machines rather than storing it on one?

Thanks
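On question 4, a rough back-of-envelope estimate: a secondary B-tree index stores roughly (key + row pointer + per-entry overhead) per row. A sketch of that arithmetic (the byte counts per row and per index entry are assumptions for illustration, not measurements):

```javascript
// Rough estimate of secondary-index size for a shortUrl index on a
// 3 TB table. All per-row/per-entry byte counts are illustrative.
const tableBytes = 3 * 1024 ** 4;      // 3 TB of row data
const bytesPerRow = 510;               // originalUrl + shortUrl + row overhead
const rows = tableBytes / bytesPerRow; // ~6.5 billion rows

const bytesPerIndexEntry = 11 + 8 + 13; // key + pointer + B-tree overhead
const indexBytes = rows * bytesPerIndexEntry;
const indexGB = indexBytes / 1024 ** 3;
console.log(indexGB.toFixed(1) + ' GB'); // on the order of ~190 GB
```

On questions 3 and 5: the database does not need the whole index in RAM. InnoDB, for example, pages the B-tree through its buffer pool, reading only the few index pages a lookup touches (typically 3–4 page reads for a cold lookup on a tree this size), so performance degrades gracefully rather than failing outright.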
{
 "_id" : "Sh",

"Name" : "HR",
"Form" : [
    {
        "Name" : "HR",
        "Permission" : {},
        "Fields" : [
            {
                "OutputFormat" : "Text",
                "Validation" : [],
                "Name" : "PASS",
                "Permission" : {
                    "Hidden" : [
                        "ac"
                    ]
                },
                "IsFormulaArg" : false,
                "MaxCharacters" : 100.0,
                "Label" : "PASS",
                "Widget" : "Dropdown",
                "DefaultPermission" : [],
                "Dropdown" : "me",
                "Score" : 1,
                "PermissionType" : "Hidden",
                "Id" : "me"
            },


            {
                "OutputFormat" : "dd/MM/yyyy",
                "NodeList" : {
                    "Score" : 2,
                    "Type" : "CellReference",
                    "Id" : "jdk2",
                    "Value" : null,
                    "CellMetadata" : {
                        "72uw" : "CreatedAt"
                    }
                },
                "Name" : "Date_Prepared",
                "Permission" : {},
                "IsFormulaArg" : false,
                "Required" : true,
                "Widget" : "Date",
                "Label" : "Date Prepared",
                "Score" : 2,
                "FormulaStr" : "CreatedAt",
                "Formula" : "True",
      …
So I have a user schema like this:

var user_schema = new Schema({
   username:{type:String,required: true, unique : true, trim: true},
   college:{type:String,required: true},
   password:{type:String,required: true, trim: true},
   email:{type:String,required: true, unique : true, trim: true},
   phone:{type:Number,required: true, unique : true, trim: true},
   dp:{type:String},
   tag:{type:String},
   description:{type:String},
   friends:[{type:String}],
   pending:[{type:String}],
   skills:{type:String},
   bucket:[{type:String}]
  });


and my objective is to search all the documents in the collection to get people based on the following conditions:

1. They should not be in the user's "friends" array.
2. They should not be in the user's "pending" array.
3. They should have the same "tag" (a string value) as the user.

So, basically, I have to compare the user's fields ("friends", "pending" and "tag") with the fields of all documents in the whole collection.

How do I do that using Mongoose (the Node.js MongoDB library)?
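Assuming `friends` and `pending` store usernames (they are `String` arrays in the schema), this can be a single `find` with `$nin` plus an equality match on `tag`. A sketch of building that filter:

```javascript
// Sketch: build the Mongoose/MongoDB filter for "same tag, not me,
// not already a friend, not pending". Assumes friends/pending hold
// usernames, as the String arrays in the schema suggest; if they hold
// ObjectIds, apply $nin to _id instead.
function discoverFilter(user) {
  return {
    tag: user.tag,
    username: { $nin: [...user.friends, ...user.pending, user.username] },
  };
}

// Usage:
//   const people = await UserModel.find(discoverFilter(me)).exec();
```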
I have to initialize an array to [] (an empty array) and then push user-chosen "tags" into it. This is my schema:

 var user_schema = new Schema({
   username:{type:String,required: true, unique : true, trim: true},
   college:{type:String,required: true},
   password:{type:String,required: true, trim: true},
   email:{type:String,required: true, unique : true, trim: true},
   phone:{type:Number,required: true, unique : true, trim: true},
   dp:{type:String},
   tags:[{type:String}],
   description:{type:String},
   skills:{type:String},
   bucket:[{type:String}]
 }); 


and I have to initialize "tags" to [] and push user-entered/chosen string values every time a request is made to the API.

I am using this line of code:

 stu_user.findOneAndUpdate(
   { _id: req.decoded.id },
   { $set:{tags:[]},$push: { tags: 'some chosen string'  } },
  function (error, success) {
         if (error) {
            console.log(error);
        } else {
            console.log(success);
        }
    });

Open in new window

But it's not working.
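MongoDB rejects an update that both `$set`s and `$push`es the same path (the server reports a conflict at "tags"), which is likely why this isn't working. If the intent is "reset the array and add the new value", express it as a single `$set` of the full new array; if the intent is "append", drop the `$set`. A sketch building one operator per intent:

```javascript
// Sketch: the original update combines $set and $push on "tags", which
// MongoDB rejects as a path conflict. Build one operator per intent.
function resetTags(newTags) {
  // "replace whatever was there with exactly these tags"
  return { $set: { tags: newTags } };
}

function appendTag(tag) {
  // "append, creating the array if the field is missing"
  return { $push: { tags: tag } };
}
```

Usage: `stu_user.findOneAndUpdate({ _id: req.decoded.id }, resetTags(['some chosen string']), callback)` for the reset case, or `appendTag(...)` on later requests.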
How do I import collections into MongoDB from the CLI?
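Assuming "import" means loading exported data back in, the stock tools are `mongoimport` (for JSON/CSV exports) and `mongorestore` (for BSON dumps made with `mongodump`). A sketch with placeholder database, collection, and file names:

```shell
# Sketch: the two standard CLI paths for getting a collection into
# MongoDB. mydb, users, users.json and dump/ are placeholders.
DB=mydb
COLL=users

# JSON/CSV export -> mongoimport (one document per line, or --jsonArray):
IMPORT="mongoimport --db $DB --collection $COLL --file users.json"

# Binary dump made with mongodump -> mongorestore:
RESTORE="mongorestore --db $DB dump/$DB"

echo "$IMPORT"
echo "$RESTORE"
```

Add `--host`/`--username`/`--password` as needed for a remote or authenticated server.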
Hey, this is just a decision-making problem; I am looking for a well-thought-out answer.
I have to develop an Android XMPP chat application (the application also has a Node.js server API, connecting to MongoDB and AWS S3 for picture uploads).

Which will be better:

1. Running an Openfire server on AWS, connecting it to the Android application, and implementing an XMPP client on the Android device using the Smack library.

2. Implementing an XMPP client on the Node.js server side and relaying the results from this API to the Android device.
I am new to Angular and Firebase. I created an app that uses email/password authentication with Firebase Auth. When the user registers, two additional fields are added and populated in Firestore (Display Name and photoURL); those are populated by code in the auth.services.ts file. So, when a user logs in again, it authenticates but then writes the Display Name and photoURL into Firestore again. I want to be able to add (or let the user add) the Display Name and photo after they are logged in. Here is what I have so far. Any thoughts or direction?

FirestoreDB
FirebaseDB
Code in auth.services.ts
Auth Services
In a MongoDB collection, a DateTime field is stored in MST format for over 1 million older records, while new records are stored in UTC.
Is there a cost-effective query to update the date property for those old records stored in MST
(i.e. convert the registered date from MST to UTC by adding 7:00 hours to each of those records)? We have MongoDB version 3.3.
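On MongoDB 3.3 there is no server-side date arithmetic in updates, so the usual route is a one-off script that walks the old records with a cursor and rewrites the field via bulk writes. The +7:00 shift itself is simple; a sketch of the conversion (assuming the field is a real Date and every old record is plain MST, with no daylight-saving cases to special-case):

```javascript
// Sketch: shift an MST-stored timestamp to UTC by adding 7 hours.
// Assumes the stored value is a Date and all old records are MST.
const MST_OFFSET_MS = 7 * 60 * 60 * 1000;

function mstToUtc(date) {
  return new Date(date.getTime() + MST_OFFSET_MS);
}

// In a migration script this feeds batched bulk updates, e.g. (pseudo):
//   bulk.find({ _id: doc._id })
//       .updateOne({ $set: { registered: mstToUtc(doc.registered) } });
```

Filtering the cursor to only the old records (e.g. by a cutoff date or a flag) keeps the job from touching the already-UTC documents.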
I'm having difficulty putting a complex (to me) aggregation query together for my data. Here's the rundown so everyone understands my conundrum.

I have two collections in this equation: "training_documents" and "users". Each user object has a key named "trainings", which is an array of objects. Each object within this trainings array contains 4 key/value pairs. An example of such an object is below.

    {
    	"document": ObjectId('5a0350ad7df0977d94cffab6'),
    	"trainee": ObjectId('59e51a4b7df0977d94cff95d'),
    	"trainer": ObjectId('595fcc2e04cf707693257890'),
    	"completion_time": "2018-04-23T21:28:22.747Z"
    }



The user's trainings array will contain many objects following the aforementioned format.

An example user data structure is below for reference.

   
 [
    	{
    		"_id": "5ad782283c55b056bcc39e3z",
    		"site": {
    			"site_id": "site1",
    			"site_name": "Site One"
    		},
    		"user_name": "jsmith",
    		"first_name": "John",
    		"middle_name": "A",
    		"last_name": "Smith",
    		"full_name": "John Smith",
    		"title": "Duh Boss...",
    		"email": "testuser@example.com",
    		"last_login": "2018-04-27T14:27:27.014Z",
    		"type": "full-time",
    		"active": true,
    		"__v": 0,
    		"badge_id": "000001780343232123",
    		"trainings": [
    			{
    				"document": ObjectId('5ae33622a766885a121b7362'),
    				"trainee": 

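With the structures above, one workable shape is to `$unwind` each user's trainings and `$lookup` the referenced training document. A sketch of the pipeline (stage values assume the collection and field names shown in the samples):

```javascript
// Sketch: flatten each user's trainings and join in the referenced
// training_documents record. Collection/field names are taken from the
// samples above.
const pipeline = [
  { $unwind: '$trainings' },
  {
    $lookup: {
      from: 'training_documents',
      localField: 'trainings.document',
      foreignField: '_id',
      as: 'trainings.detail',
    },
  },
  { $unwind: '$trainings.detail' },
];

// Usage: db.users.aggregate(pipeline)
```

From there a `$group` on `$_id` (or on a training field) can re-aggregate per user or per document, depending on what the report needs.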


Node.js and MongoDB:
I would like to create a dynamic navbar with the MEAN stack, a navbar that the admin can modify. The navbar has:
Main category
Sub-category title
Sub-category
Example image: https://prnt.sc/j3nxz7
Mongo 2.6.12

I'm not a Mongo admin, just trying to figure out an issue no one else wants to take on.

Mongo is flushing logs after 24 hours. Where is that configuration set, or better yet, what command can I run against a collection to see the details of log retention?
Hi There,

I have a mongodb aggregate query in PHP that is throwing an error and I can't see what I've missed in the syntax.

Versions are
mongodb =  3.6
PHP  = 7.1
mongo PHP driver = 1.3.1-1


 The query in mongo shell works and looks like this:

db.products.aggregate(
	[
		{
			$match: {
			    download_Date : {'$gte' : 20180205 }
			}},
		{
			$sort: {
			download_Date: 1 }
		},
		{
			$group: {
			 _id: "$cw_product_id", batch: { $last : "$download_Date" }
	        }},
	]
)



My PHP code fails and looks like this:

$command = new MongoDB\Driver\Command([
    'aggregate' => 'products',
    'pipeline' => [    
        ['$match' =>  [ 'download_Date' => ['$gte' => $batch ]]],
        ['$sort'  => [  
            'download_Date' => 1
        ]],
    ['$group' => [ 
        '_id' => [ '$cw_product_id', 'batch' => ['$last' => '$download_Date'] ],
        ]],   
    ],
        'allowDiskUse' => true, 
        'cursor' => new stdClass, 
    ]);



The error I receive when running the PHP script is:
PHP Fatal error:  Uncaught MongoDB\Driver\Exception\RuntimeException: Unrecognized expression '$last'


I've pulled it apart and put it back together so many times that a fresh pair of eyes is needed to spot the obvious.

Thanks in advance,

Rick
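The shell pipeline and the PHP pipeline are not equivalent: in the PHP version, the `'batch' => ['$last' => ...]` pair sits *inside* the `_id` array, so the driver tries to evaluate `$last` as part of the group key — hence "Unrecognized expression '$last'". In `$group`, accumulators like `$last` must be siblings of `_id`. Expressed in shell-JS form (matching the working query above), the intended stage shape is:

```javascript
// Sketch: the $group stage as the working shell query defines it.
// "batch" is a sibling of "_id", not nested inside it.
const group = {
  $group: {
    _id: '$cw_product_id',
    batch: { $last: '$download_Date' },
  },
};
```

In PHP that means `['$group' => ['_id' => '$cw_product_id', 'batch' => ['$last' => '$download_Date']]]` — note that `'_id' => '$cw_product_id'` is a plain scalar value, with `'batch'` as the next key of the `$group` array.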
Hi,

I have a text file which has multiple JSON rows in it.

What I need to do is the following:

1. Read through the file.
2. Select only certain elements, not all, and remap them to new names; for example, AuthenticationId could become AuthId.
3. Ingest into MongoDB as JSON.

How can this be achieved in Python?

{"AuthenticationId":"997","CommandLine":"C:\\Windows\\system32\\wbem\\wmiprvse.exe -secured -Embedding","ConfigBuild":"1007.3.0005907.1","ConfigStateHash":"3163607488","EffectiveTransmissionClass":"3","Entitlements":"15","ImageFileName":"\\Device\\HarddiskVolume3\\Windows\\System32\\wbem\\WmiPrvSE.exe","ImageSubsystem":"2","IntegrityLevel":"16384","MD5HashData":"1df2fc82b861bc9612657d1661e9ae33","ParentAuthenticationId":"997","ParentProcessId":"1627804482161","ProcessCreateFlags":"16","ProcessEndTime":"","ProcessParameterFlags":"24577","ProcessStartTime":"1512962460","ProcessSxsFlags":"64","RawProcessId":"9276","SHA1HashData":"1aa3fda50123dd14a055b4d6601beedead69fe11","SHA256HashData":"835f2a94e47830b06654e484bf7a1cc0b9882f579716dca198e32d22218a07e5","SourceProcessId":"1627804482161","SourceThreadId":"39913451443946","TargetProcessId":"1756187997441","TokenType":"2","UserSid":"S-1-5-19","aid":"2bc82f8878df4b9f7712273e755a93be","aip":"105.255.135.108","cid":"99cdff8f89af458d858c3d6b3e312e11","event_platform":"Win","event_simpleName":"ProcessRollup2","id":"4ff73eae-de22-11e7-a0dd-06e913674db2","name":"ProcessRollup2V6","timestamp":"1512962460994"}
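A minimal sketch of steps 1–2 (the field map keeps only a few example fields from the sample row; extend it with whichever elements are actually needed). The commented-out step 3 assumes `pymongo` and a reachable `mongod`:

```python
import json

# Map of "element to keep" -> "new name". The AuthenticationId -> AuthId
# rename comes from the question; the rest are illustrative choices.
FIELD_MAP = {
    "AuthenticationId": "AuthId",
    "CommandLine": "CommandLine",
    "MD5HashData": "MD5",
    "timestamp": "timestamp",
}

def remap(line):
    """Parse one JSON line and keep/rename only the mapped fields."""
    record = json.loads(line)
    return {new: record[old] for old, new in FIELD_MAP.items() if old in record}

def load(path):
    """Read a file of JSON rows (one per line) into remapped dicts."""
    with open(path) as fh:
        return [remap(line) for line in fh if line.strip()]

# Step 3 (assumes pymongo is installed and mongod is running):
#   from pymongo import MongoClient
#   MongoClient().mydb.events.insert_many(load("events.txt"))
```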


MongoDB v2.6.12 keeps losing my user credentials on a database. I'm running Iquidus Explorer and thus need to be running v2.6.x of MongoDB for it to work. Most of the time it works fine, but every now and again my script disconnects and gets an error that the user credentials are no longer correct.

To fix this I have to run mongo and type use database1 and then db.createUser( { user: "username", pwd: "blahblah", roles: [ "readWrite" ] } )

This is far from ideal. Can anyone shed any light on what's going on and how to prevent the credentials from getting screwed up?

I'm running MongoDB on CentOS 7 64-bit.
I've been thinking about migrating my ERP (currently Java) from an RDBMS (currently Firebird) to NoSQL (likely MongoDB), and I'm trying to anticipate some issues. I'm trying to build a "fail-proof" inventory control that never lets an item's quantity drop below 0. Of course, I still have an "ACID bias" that I need to leave behind in order to complete this task. The goal is to update the inventory quantity of all of an order's items when the order is approved. Two problems can occur, which are currently resolved with a trigger.

Problem 1: an order has 2 items (ball: qty 2, chair: qty 1) and there is not enough quantity in inventory (ball: qty 1, chair: qty 0) to complete the order. Once I trigger the command to complete the order, a loop executes to decrease the quantities in inventory and fails on the second item because there is not enough stock. Consequently, a rollback is triggered and everybody goes home happy.

Problem 2: in one word, concurrency. Suppose there is no trigger controlling this operation, only "selects and ifs". User 1 queries the inventory quantities, sees enough stock (ball = 2, chair = 1), and the system allows the order to complete. While that transaction is running, user 2 queries the same quantities; because the first transaction has not finished yet, the second query sees the "old" quantities (ball = 2, chair = 1) and the system also lets the order complete. Consequently, the chair quantity is decreased twice, when it should have been decreased only once and failed the second time. Result: a negative quantity.

I saw some workarounds with inventory reservation but I …
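For what it's worth, the standard MongoDB answer to both problems is a conditional atomic decrement: make the availability check part of the update's filter, so the check and the decrement happen as one operation and a concurrent second order simply fails to match. A sketch of the semantics against an in-memory stand-in for the collection:

```javascript
// Sketch: conditional decrement. In MongoDB this is
//   updateOne({ _id: item, qty: { $gte: n } }, { $inc: { qty: -n } })
// which is atomic per document, so two concurrent orders cannot both
// succeed on the last unit. The Map here just demonstrates the logic.
function decrementIfAvailable(inventory, item, n) {
  const current = inventory.get(item) ?? 0;
  if (current < n) return false;     // filter { qty: { $gte: n } } missed
  inventory.set(item, current - n);  // $inc: { qty: -n }
  return true;
}

// Completing an order = try every line; if any line fails, give back
// what was already taken (manual compensation, since there is no
// multi-document rollback on older MongoDB versions).
function completeOrder(inventory, lines) {
  const taken = [];
  for (const [item, qty] of lines) {
    if (!decrementIfAvailable(inventory, item, qty)) {
      for (const [i, q] of taken) inventory.set(i, inventory.get(i) + q);
      return false;
    }
    taken.push([item, qty]);
  }
  return true;
}
```

This reproduces Problem 1's rollback as explicit compensation and removes Problem 2's race, at the cost of the compensation logic living in the application rather than in a trigger.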
